Migrated Jobs From Desktop No Longer Follow Retention Rules on SPX

I migrated from Desktop to SPX around 10 days ago, which automatically imported my two backup jobs. These jobs were set up to delete all prior backups (full image and incrementals) on two internal drives before beginning a new full image backup every two weeks. The deletions are not occurring, which is causing the backup jobs to fail.

SPX is installed on a home PC running Windows 10 64-bit. There are two primary drives, a 1TB system SSD and a 4TB data HDD. The 1TB drive is backed up to an internal 1.5TB HDD and the 4TB HDD is backed up to another internal 4TB HDD. The 1TB system drive is at 80% capacity and the 4TB data drive is at 70% capacity. There are individual backup jobs for both drives, and both are set to full images every 14 days and incremental backups every 12 hours. These settings have been in use for over three years under Desktop without issue.

Last night, the first full image backup was scheduled under SPX and it failed with the following error:

Backup "E Drive Backup" on NewDesktop failed backing up to F:\ (F:\). I/O error.

Examining drive F: reveals that all the prior backups are still there and that only 1TB of free space remains, which is not enough to contain another full image.

The imported jobs have the same retention settings under SPX that were automatically set up by the migration. The retention radio button is set to “Keep only recent images” and the retention number is set to “1” under “Retain X most recent image sets”. The sub-setting “Perform deletions before a full backup” is checked, but the entire line is greyed out. I can get the sub-setting to become active by raising the retain number from “1” to “2” or higher. However, dropping it back down to “1” causes the sub-setting to again be greyed out, but this time it is no longer checked.

How do I get these jobs to perform in the same manner that they did under Desktop?





Hi warc1,

In order to dig into this, it would help us out if you gathered the DTX diagnostics from the system in question and created a case at http://www.storagecraft.com/support/support-request. The behavior of the option to delete backups prior to the job running while keeping only one set is normal in SPX - it was designed to work that way to avoid losing data if a system goes down before that full backup is complete.

In order for us to see why the retention isn't running, though, we would need a case and diagnostics.



There is one clarification I would request. My jobs were set up to retain zero prior backups under Desktop since all prior backups were deleted immediately prior to each new full image. Are you saying that this has been changed under SPX so that at least one prior full image must be kept? If so, that is the issue since my internal backup drives do not have room for two images.

I fully recognize the risk of my approach: should a drive fail during backup, there would actually be no backup available on the PC. That is why I have a third external 4TB drive that I manually back up once a month and store offsite. I also have a Backblaze account to do daily incremental backups to the cloud. In summary, the SPX backups provide the easiest recovery options, while the offsite storage and Backblaze account provide more onerous recovery options for low-risk but high-impact loss events.



That is correct, you can't set your retention to 0 in SPX. What compression are you using on your backup job currently? If possible, maybe you could increase that to allow both backups to fit on the disk?



Compression is set to "standard". Most of the data is comprised of photos and videos that are already in compressed formats, so further compression is not going to recover the necessary space. If backups need to retain one prior copy, can backups be spanned across multiple disks? My situation arose because my 4TB data disk was the largest consumer drive available when I built my PC, so backup drives could be no larger.

Finally, just for my understanding, having zero retention appears to correspond to a retention number setting of "1" in SPX with the sub-setting “Perform deletions before a full backup” checked. This is how my backup jobs were automatically migrated to SPX. This seems to imply that the Desktop functionality that allowed zero retention was intended to be grandfathered into SPX, but you could not set up new jobs with this setting, because SPX requires a retention setting of "2" or higher before you are permitted to check "Perform deletions before a full backup" on a new job. If my understanding is not correct, then I have to retain two full images with each new image, thus requiring a backup drive three times the capacity of the source drive.



If you want to delete prior to backing up, that is correct. To maintain one set, you'll just need to delete after the backup. And we don't support spanning a backup over multiple disks.



I am in the process of speccing out a new PC based on a 2TB system disk and a 10TB data disk. Splitting data across multiple disks creates issues for me that I would rather avoid. Since backups cannot span multiple disks, the workaround that eliminates the risk of disk failure during backup could be to buy three 10TB disks (one data, two backup) and set up two backup jobs that create independent backups on the two backup drives at different times. However, as I understand it, SPX will not allow me to automate this because it will not allow the advance deletion of an image, even though there is another one on a separate disk. Can this functionality not be restored to SPX, since it remains a part of Desktop?


2 TB system disk?

Am I reading that right? That's pretty hefty!


StorageCraft Certified Master Engineer

Veeam Technical Sales Professional (v9)



Yup. By system disk I mean OS and apps, for which I admit the biggest disk hogs are games. I'm keeping utilization of my current 1TB SSD below 80% to ensure sufficient overhead for housekeeping. That means I've already started migrating some apps/games to the data HDD. For my new build, a 2TB system disk is planned to ensure sufficient capacity for the projected life of the system. Future-proofing is also why the data disk capacity will be bumped up from 4TB to 10TB.



Just to close this thread out, can I get confirmation of my understanding that there currently is no SPX functionality to automate the backup of large-capacity consumer drives (e.g. 6, 8, and 10TB drives)? And further, that there is no intent to restore this functionality in SPX even though it existed in Desktop? I need to know if I have to start looking for a ShadowProtect replacement.



We fully support the ability to back up large drives and volumes, provided there is enough space to store the backups. The only functionality we removed was deleting the only remaining backup prior to a fresh backup, to avoid leaving our customers without backups in the event of a failure.



Which means that 6, 8 and 10TB drives are not viable in an SPX backup environment, because the required 12, 16 and 20TB backup drives do not exist as consumer devices. Customers can easily address your stated rationale for barring this by using two backup drives with independent backup jobs. However, StorageCraft has decided to take that choice away, even though you could qualify and disclaim the hell out of it if you are that concerned about improper use or liability.

In summary, I don't understand the business model of eliminating customer choice solely to address a risk that is completely avoidable. 


SPX > New job > Advanced > Start job script

I've not tested this out, but couldn't you use the Start job script function within SPX?


When the job starts, a batch file executes to quick-format the volume, containing something like:

@echo off
format z: /fs:ntfs /v:Backups /q /force


Think I'll go off and test this myself now with a job.




Works for me.

This works for me.

1. Created formatzvol.bat batch file with following example:

@echo off
format z: /fs:ntfs /v:Backups /q /force
mkdir Z:\Backups\TEST


2. Place batch file in the following location:



3. Created backup job: test

- All settings as you require

- You might need to first manually create the backup destination for SPX to register it as usable in your job

(Advanced tab)

- Check 'Start job script'

- Choose formatzvol.bat

- Check 'Fail backup if script is unsuccessful'


4. Executed backup job

- The batch file will first attempt to force a quick format of the volume

- The batch file will also create the folder destination for your configured backup job; this has to match what is configured in the job

- SPX waits for the batch file to succeed, then executes the backup on a clean volume


Will be interested to hear how you get on with this.






Thank you very much for investigating my issue and providing detailed instructions on a potential workaround. I will give it a shot tonight and report back.



Works fine for full backups. I'm assuming it won't work for mixed full/incremental backups, since it appears that the startup script formats the drive every time the job is run, so it would not be possible to build up an incremental chain. Regardless, being able to automate full backups is better than nothing.

Thanks again.
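
One possible refinement, purely as an untested sketch building on the formatzvol.bat example above: only wipe the volume when the existing full image is old enough that the job's next run should be a fresh full, so incrementals in between are left alone. The `forfiles` check, the `Z:\Backups` path, and the 14-day threshold are all assumptions based on the schedule described earlier in this thread, so test carefully on unimportant data first.

```batch
@echo off
rem Untested sketch: reformat Z: only when the newest full image (.spf)
rem is at least 14 days old, i.e. when SPX is due to start a fresh full
rem rather than append an incremental. Assumes a single image set on Z:
rem and the 14-day full-backup interval described above.
forfiles /p Z:\Backups /s /m *.spf /d -14 >nul 2>&1
if %errorlevel% equ 0 (
    format z: /fs:ntfs /v:Backups /q /force
    mkdir Z:\Backups\TEST
)
```

`forfiles` exits with 0 only when at least one matching file is found, so on incremental runs (or a first run against an empty volume) the format is skipped and the backup proceeds against the existing chain.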



I'm glad it worked out for you in respect of the full backups. Formatting the volume is a bit of a sledgehammer, but it's fairly quick.

You should be able to create and execute a script tailored a bit more to your needs if you use all the correct logic and syntax for commands.

Depends on how much time you want to spend on it I guess.

The 'del' command might be more what you're after. Just exercise caution. Worth playing around with unimportant stuff first.

cmd > del /?

gives you the following output and switches:

Deletes one or more files.

DEL [/P] [/F] [/S] [/Q] [/A[[:]attributes]] names
ERASE [/P] [/F] [/S] [/Q] [/A[[:]attributes]] names

  names         Specifies a list of one or more files or directories.
                Wildcards may be used to delete multiple files. If a
                directory is specified, all files within the directory
                will be deleted.

  /P            Prompts for confirmation before deleting each file.
  /F            Force deleting of read-only files.
  /S            Delete specified files from all subdirectories.
  /Q            Quiet mode, do not ask if ok to delete on global wildcard
  /A            Selects files to delete based on attributes
  attributes    R  Read-only files            S  System files
                H  Hidden files               A  Files ready for archiving
                I  Not content indexed Files  L  Reparse Points
                -  Prefix meaning not

If Command Extensions are enabled DEL and ERASE change as follows:

The display semantics of the /S switch are reversed in that it shows
you only the files that are deleted, not the ones it could not find.


For example, something like this should delete only incremental (and consolidated) .spi files throughout subfolders across the volume, but not the full .spf:

@echo off
rem Switch to the backup volume first so the recursive delete runs
rem against Z: and not the script's working directory.
z:
cd \
del /s /q /f *.spi


Just bear in mind caution and testing.


