Issue: On a protected system, some disks are listed on the details pane but do not appear in the File Browser when a recovery point is mounted, and are not available via iSCSI export. The missing disks are also unavailable when virtualizing the system.
Cause: A missing <Disk GUID>.meta file for the protected system in /tank/admin/<Server GUID>
Resolution: Recreate the missing meta file by running the following on the appliance:
flask
repenv
python
import dataMigration
dataMigration.create_missing_metafiles()
Old manual way (use if the helper above does not resolve the issue):
Procedure:
Verify that no backup is currently in progress for the affected system
Stop and disable the agent service on the protected system
SSH into the affected appliance and change directory to /tank/admin/<Server GUID>
(Note: the Server GUID is listed on the details pane of the protected server.)
List the directory and identify any and all disk image files that do not have a corresponding .meta file
Copy an existing meta file to a new file named <Missing Disk GUID>.meta
(Note: the Disk GUID is displayed on the protected system details pane under volume details.)
If there is no existing .meta file to copy, create a new file in the following format, with no trailing spaces or blank lines:
{"mountPointName": "C", "isBootable": true, "VolumeStartOffset": 0}
Replace the drive letter, isBootable (true/false), and VolumeStartOffset values with the correct values for the affected disk
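The scan-and-create steps above can be sketched in Python. This is illustrative only: the helper names (find_images_missing_meta, create_meta_template) are hypothetical, and it assumes the disk image files and their <Disk GUID>.meta companions sit side by side in /tank/admin/<Server GUID>, as described above.

```python
import json
import os

def find_images_missing_meta(admin_dir):
    """Return files in admin_dir that have no matching <name>.meta file.

    Assumes disk images are named by Disk GUID with no extension, so any
    non-.meta entry lacking a sibling .meta file is a candidate.
    """
    entries = set(os.listdir(admin_dir))
    return sorted(
        name for name in entries
        if not name.endswith(".meta") and name + ".meta" not in entries
    )

def create_meta_template(admin_dir, disk_guid, mount_point, is_bootable, offset):
    """Write a .meta file in the format shown above and return its path."""
    meta = {
        "mountPointName": mount_point,       # drive letter, e.g. "C"
        "isBootable": is_bootable,           # true for the boot volume
        "VolumeStartOffset": offset,         # 0, or the DISKPART offset in bytes
    }
    path = os.path.join(admin_dir, disk_guid + ".meta")
    with open(path, "w") as f:
        json.dump(meta, f)                   # valid JSON, no trailing blank lines
    return path
```

Writing the file with json.dump sidesteps the trailing-space and blank-line pitfalls of hand-editing.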
How to determine VolumeStartOffset
Generally, x360Recover creates a dedicated image for each Windows volume that has a drive letter, and the offset for these disks will be 0. (Even when a physical disk is carved into multiple partitions, as long as all partitions are NTFS and have a drive letter, separate disk image files will be created and the offset will be 0.)
However, if there are any non-Windows partitions or special partitions (like the Windows System Reserved (SRP) partition), the value for the offset will not be 0. Follow the steps below to determine the offset.
Note: If necessary, refer to the Disk Management console (diskmgmt.msc) to visually see partitions while running DISKPART commands.
From an elevated command prompt on the protected system run DISKPART
Perform the following:
list disk
select disk x (where x is the number listed for the affected disk)
list partition
select partition x (where x is the number listed for the affected drive letter)
detail partition
Set VolumeStartOffset to the value displayed for "Offset in Bytes" for the partition
Save the .meta file and repeat this process for any additional missing drives
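Before re-enabling the agent, it can help to sanity-check each hand-written .meta file. The sketch below is illustrative (the check_meta helper is hypothetical); it assumes the three keys shown earlier are required, and that a plausible offset is a non-negative multiple of the common 512-byte sector size. Passing these checks does not prove the offset matches the real partition.

```python
import json

REQUIRED_KEYS = {"mountPointName", "isBootable", "VolumeStartOffset"}

def check_meta(text, sector_size=512):
    """Return a list of problems found in the contents of a .meta file.

    An empty list means the file parses and looks sane; it does not
    guarantee the values match the actual disk layout.
    """
    problems = []
    try:
        meta = json.loads(text)
    except ValueError as exc:
        return [f"not valid JSON: {exc}"]
    missing = REQUIRED_KEYS - set(meta)
    if missing:
        return [f"missing keys: {', '.join(sorted(missing))}"]
    if not isinstance(meta["isBootable"], bool):
        problems.append("isBootable must be true or false")
    offset = meta["VolumeStartOffset"]
    if not isinstance(offset, int) or isinstance(offset, bool) or offset < 0:
        problems.append("VolumeStartOffset must be a non-negative integer")
    elif offset % sector_size:
        problems.append(f"VolumeStartOffset is not a multiple of the {sector_size}-byte sector size")
    return problems
```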
Once finished, exit the ssh shell
Enable and start the agent service on the protected system
Perform an immediate backup and monitor its progress. If all is well, the backup will complete successfully and access to the missing disks will be restored for this and future snapshots
If the backup fails, something is wrong with the meta files you just created, and they will be deleted automatically. Repeat the process above one drive at a time and verify the disk offset values until backups complete successfully
How to access data on the prior recovery points
From the GUI, perform a mount operation for the desired recovery point to expose the snapshot data on the Appliance.
(Note, the missing disks will not be available from the File Browser)
SSH into the Appliance and perform the following
Determine the mounted system path
df -h (this will display all mounted disk volumes in a friendly fashion)
The mounted recovery point will be located at /tank/admin/<Server GUID>_<year>_<month>_<day>_<hour>_<minute>_clone
cd <recovery point path>
ls (display all disk volume images)
Create a place to mount the missing disk file
mkdir /tmp/DRIVEx (where x is the drive letter you want to access)
Mount the missing volume on Linux
mount -o loop /tank/admin/<recovery point path>/<Disk GUID> /tmp/DRIVEx
(Note: the Disk GUID is listed on the details pane of the protected server under volume details.)
Once mounted, access the files using WinSCP or other utility
Connect to the appliance IP address and navigate to /tmp/DRIVEx to access the files
When finished, don’t forget to clean up!
umount /tmp/DRIVEx (this removes the manual disk mount)
close the ssh shell
From the GUI, unmount the recovery point