Prevent having to buy extra storage by reclaiming unused space
A standard ServiceNow instance can grow up to 4TB before ServiceNow comes knocking on your door to tell you that your instance is getting too big and that you need to purchase extra capacity for your database.
No worries. 4TB is a lot, and I’ve worked with customers that have been running their instances for decades and haven’t really bothered with extra cleanup besides removing some old tickets and the standard cleanup you get OOB.
But recently we were asked to look at the instance of a client that had already purchased extra storage and was in talks with ServiceNow to extend even that. Their database had grown to 6.9TB. The teams working on the platform were already implementing extra cleanup rules to get rid of old records (and attachments!) they were no longer required to keep, but that didn’t reduce the database footprint enough to prevent extra cost.
There is some documentation on database footprints (please check out the extensive library of Community articles written by Mark Roethof), but an article from Dominik Simunek that I came across when not even looking for it turned out to be a huge treasure for us (link at the bottom of this blog). It gave us insight into ServiceNow’s Database Compaction, which gave us back 2TB of storage room on our instance (yes, TWO TERABYTES, that’s no typo).
And all we had to do was add some system properties to the instance and tweak them so they were optimized for our tables. Please read on if you are curious about how you could do the same (if necessary).
Databases grow. Data gets added and deleted. But deleting data doesn’t automatically give you back the space that data occupied. Database Compaction does give you back that space.
TABLES
There are a couple of tables you can utilize to see what’s going on with your instance, related to the size of the database.
[sys_physical_table_stats] – this table shows you the table size in GB, the row count in thousands and the estimated number of GB you can reclaim. Be aware that these values may differ from the actual data you see on the table itself or in your database footprint overview on NowSupport.
[sys_schema_change] – this table shows you changes to tables through different Alter Types. If you select ‘Compact Table’ as Alter Type, you will see the tables that are changed by the Database Compaction job.
[sys_compaction_run] – this table shows you the tables that Database Compaction ran on. It shows the start and end times and (for us) most importantly the reason why a table wasn’t compacted (the job checks every table to see if compaction is required).
PROPERTIES
The [sys_compaction_run] table will show you why a table isn’t compacted, and that is based on several conditions that are all managed by default settings. These settings can be changed (without any risk of performance issues, as we found out). To change them, you will need to create some system properties. These don’t exist OOB for some reason.
| Property | Description | Default value |
| --- | --- | --- |
| glide.db.compaction.criteria.reclaim_size_mb | Minimum reclaim size (MBs) for a table to be eligible for the database compaction | 10240 (10 GB) |
| glide.db.compaction.max_tables_compacted_timeframe_days | Number of days for the maximum tables compacted timeframe | 1 |
| glide.db.compaction.criteria.max_table_size_mb | Maximum table size (MBs) for a table to be eligible for database compaction | 102400 (100 GB) |
| glide.db.compaction.criteria.reclaim_percentage | Minimum reclaim percentage (%) for the table to be eligible for database compaction | 50 |
| glide.db.compaction.criteria.max_row_count | Maximum row count for the table to be eligible for database compaction | 100000000 (100 million) |
| glide.db.compaction.max_tables_compacted | Maximum number of tables to be compacted within the defined timeframe | 5 |
With the default settings, up to 5 tables per day will be compacted, each with at most 100 million records and a maximum size of 100 GB, provided at least 10 GB of space can be reclaimed and that reclaimable space amounts to at least 50% of the table size.
As you can imagine, this will do some Database Compaction on an instance, but the conditions also exclude many tables.
We updated the values on the properties to include a few more tables and reclaim a bit faster. With a maximum of 8 tables per day, each up to 500 million rows and 1 TB in size, the minimum reclaim percentage lowered to 10 and the minimum reclaim size lowered to 5 GB, we were stunned by the results.
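To see how the criteria interact, here is a minimal sketch (in Python, purely illustrative — the actual checks run inside ServiceNow’s Database Compaction job) that evaluates a table’s stats against a set of criteria. The criteria keys mirror the system properties above; the example table stats are made up.

```python
# Illustrative model of the compaction eligibility checks described above.
# The criteria names mirror the glide.db.compaction.* properties; the
# example numbers are hypothetical, not taken from a real instance.

DEFAULTS = {
    "reclaim_size_mb": 10240,        # criteria.reclaim_size_mb (10 GB)
    "max_table_size_mb": 102400,     # criteria.max_table_size_mb (100 GB)
    "reclaim_percentage": 50,        # criteria.reclaim_percentage
    "max_row_count": 100_000_000,    # criteria.max_row_count
}

# The loosened values described in the paragraph above.
TUNED = {
    "reclaim_size_mb": 5120,         # 5 GB
    "max_table_size_mb": 1_048_576,  # 1 TB
    "reclaim_percentage": 10,
    "max_row_count": 500_000_000,
}

def eligible(size_mb, rows, reclaim_mb, criteria):
    """Return True if a table passes all compaction criteria."""
    return (size_mb <= criteria["max_table_size_mb"]
            and rows <= criteria["max_row_count"]
            and reclaim_mb >= criteria["reclaim_size_mb"]
            and reclaim_mb / size_mb * 100 >= criteria["reclaim_percentage"])

# A hypothetical 400 GB table with 200M rows and 80 GB reclaimable:
# too big and too full for the defaults, but eligible with the tuned values.
print(eligible(409600, 200_000_000, 81920, DEFAULTS))  # False
print(eligible(409600, 200_000_000, 81920, TUNED))     # True
```

Note that with the defaults, large tables fail on size or row count long before the reclaim thresholds even matter, which is exactly why loosening the limits opened up so many more candidates for us.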
Within 2 weeks we reclaimed 2 TB of space just by creating and tuning 6 system properties. A reduction of 28%, making our client compliant again.
Are you curious about which tables are the largest on your instance? Check out the OOB ‘Telemetry – Table Growth’ dashboard. But, as with everything, if you want to discuss things with ServiceNow, get your numbers from the Database Footprint overview on NowSupport. The data on your instance is an indication; the exact overview can be found there.
Dominik Simunek’s blog (highly recommended read!)
ServiceNow Database Compaction

