Leveling Up with osquery for Workloads: Determining Free Disk Space on Linux & macOS

March 21, 2022

Audit and Remediation provides direct access to osquery functionality within the VMware Carbon Black Cloud console, enabling security, compliance, and IT teams to query over 2,000 individual attributes across endpoints and workloads. osquery can help teams gather information at scale across environments for IT and help desk operations, compliance and M&A reporting, incident response, and security investigations.

Audit and Remediation allows administrators to ask questions about the environment across hardware, software, and network variables at scale. VMware Carbon Black Cloud Workload customers have access to the full Audit and Remediation capabilities. In this blog series, we'll lay out relevant queries that VMware Carbon Black Cloud Workload customers can use to achieve a variety of use cases. All queries are available in the VMware Carbon Black Cloud console or in the VMware Carbon Black User Exchange.

Requirements:  

This blog series is intended for readers who have a basic understanding of SQLite and an osquery test environment. If either of those is missing, please take a moment to read the Audit and Remediation Best Practices Guide before reading the rest of this blog.


Free Disk Space on Linux/macOS  

System administrators, auditors, and help desk analysts are frequently asked to quickly determine how much free disk space their endpoints and servers have at any point in time. They might need this information for troubleshooting purposes or to proactively look for potential problems or anomalies. This blog post will show how we can pull this information in macOS and Linux environments and use some on-the-fly math to generate percentages for the amount of disk space in use.

To construct this query we are going to use the mounts table, so let’s look at some sample data:  

osquery> select * from mounts where path = '/';  
          device = /dev/disk1s1s1  
    device_alias = /dev/disk1s1s1  
            path = /  
            type = apfs  
     blocks_size = 4096  
          blocks = 244199454  
     blocks_free = 50984380  
blocks_available = 50984380  
          inodes = 2039952463  
     inodes_free = 2039375200  
           flags = 1149292545  

The columns we will be using are blocks, blocks_available, and blocks_size. You may notice blocks_free and wonder how it differs from blocks_available. The latter is the number of free blocks minus the number of reserved blocks, which makes it the appropriate column for our needs. Because all the data is reported in blocks, we will use blocks_size to convert it to bytes. First, we take the total number of blocks, subtract the number available, and then multiply that difference by the block size. Then, to convert to gigabytes (GB), we divide by the number of bytes in a GB (1073741824):
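As a quick sanity check on that arithmetic, the sample values from the mounts output above can be plugged in outside of osquery. This is plain Python, purely illustrative, using the numbers shown in the earlier sample output:

```python
# Sample values from the mounts output shown above (APFS root volume)
blocks = 244199454           # total blocks on the volume
blocks_available = 50984380  # free blocks minus reserved blocks
blocks_size = 4096           # bytes per block

GIB = 1073741824  # bytes in one GB (1024**3)

# Total minus available, times block size, gives used bytes;
# dividing by GIB converts to gigabytes
used_bytes = (blocks - blocks_available) * blocks_size
used_space_gb = used_bytes / GIB
print(round(used_space_gb, 2))  # ~737.06, matching the ~737 GB shown below
```

The same formula is what the osquery queries that follow compute directly in SQL.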

osquery> select ((blocks - blocks_available) * blocks_size)/1073741824 AS used_space_gb from mounts where path ='/';  
+---------------+  
| used_space_gb |  
+---------------+  
| 737           |  
+---------------+  

When you do math with SQLite, and by extension osquery and Live Query, dividing one integer by another performs integer division and discards the remainder. To get a floating-point result, you need to multiply one of the operands by 1.0.
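This is standard SQLite behavior, and it can be demonstrated with Python's built-in sqlite3 module, which uses the same SQLite engine under the hood:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Both operands are integers, so SQLite performs integer division
int_div = conn.execute("SELECT 7 / 2").fetchone()[0]
print(int_div)  # 3

# Multiplying one operand by 1.0 promotes the expression to floating point
float_div = conn.execute("SELECT (7 * 1.0) / 2").fetchone()[0]
print(float_div)  # 3.5
```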

osquery> select ((blocks - blocks_available * 1.0) * blocks_size)/1073741824 AS used_space_gb from mounts where path ='/';  
+------------------+
| used_space_gb    |
+------------------+
| 737.064117431641 |
+------------------+

While that works, I would like to limit the result to just two decimal places. We can do that with the SQLite function printf().
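Since printf() is a core SQLite function, its formatting can be previewed via Python's sqlite3 module (the same SQLite engine osquery embeds) before running the full query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# printf('%.2f', ...) formats a value to two decimal places,
# rounding the final digit rather than truncating it
formatted = conn.execute(
    "SELECT printf('%.2f', 737.064117431641)"
).fetchone()[0]
print(formatted)  # '737.06'

rounded = conn.execute("SELECT printf('%.2f', 2 / 3.0)").fetchone()[0]
print(rounded)  # '0.67'
```

Note that printf() returns a string, which is fine for display purposes in query output.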

osquery> select printf('%.2f',((blocks - blocks_available * 1.0) * blocks_size)/1073741824) AS used_space_gb from mounts where path ='/';  
+---------------+  
| used_space_gb |  
+---------------+  
| 737.07        |  
+---------------+  

As you can see, printf() also properly rounds the final decimal place rather than truncating it. Now we can use this same method to show the amount of space left and the total size of the disk.

osquery> select printf('%.2f',((blocks - blocks_available * 1.0) * blocks_size)/1073741824) AS used_space_gb,  
    ...>     printf('%.2f',(1.0 * blocks_available * blocks_size / 1073741824)) AS space_left_gb,  
    ...>     printf('%.2f',(1.0 * blocks * blocks_size / 1073741824)) AS total_space_gb  
    ...> from mounts  
    ...> where path ='/';  
+---------------+---------------+----------------+  
| used_space_gb | space_left_gb | total_space_gb |  
+---------------+---------------+----------------+  
| 737.08        | 194.46        | 931.55         |  
+---------------+---------------+----------------+  

Now it is trivial to turn these values into percentages: divide the used (or available) space by the total size and multiply by 100.

osquery> select printf('%.2f',((blocks - blocks_available * 1.0) * blocks_size)/1073741824) as used_space_gb,  
    ...>     printf('%.2f',(1.0 * blocks_available * blocks_size / 1073741824)) as space_left_gb,  
    ...>     printf('%.2f',(1.0 * blocks * blocks_size / 1073741824)) as total_space_gb,  
    ...>     printf('%.2f',100 * (((blocks - blocks_available * 1.0) * blocks_size)/1073741824)/(1.0 * blocks * blocks_size / 1073741824)) as '%_used',  
    ...>     printf('%.2f',100 * (1.0 * blocks_available * blocks_size / 1073741824)/(1.0 * blocks * blocks_size / 1073741824)) as '%_available'  
    ...> from mounts  
    ...> where path = '/';  
+---------------+---------------+----------------+--------+-------------+  
| used_space_gb | space_left_gb | total_space_gb | %_used | %_available |  
+---------------+---------------+----------------+--------+-------------+  
| 737.12        | 194.42        | 931.55         | 79.13  | 20.87       |  
+---------------+---------------+----------------+--------+-------------+  
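The percentage math can also be double-checked against the raw sample values from the mounts output at the top of this post. This is a standalone Python check; the exact Live Query figures will differ slightly, since disk usage changes between runs:

```python
# Raw values from the sample mounts output shown earlier
blocks = 244199454
blocks_available = 50984380

# The block size and GB conversion cancel out of the ratio,
# so the percentages can be computed from block counts directly
pct_used = (blocks - blocks_available) / blocks * 100
pct_available = blocks_available / blocks * 100

print(round(pct_used, 2))       # 79.12 (% used)
print(round(pct_available, 2))  # 20.88 (% available)

# The two percentages always sum to 100
assert round(pct_used + pct_available, 2) == 100.0
```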

This query is available as a recommended query in Live Query on the VMware Carbon Black Cloud.  

This blog post has shown how we can use built-in SQLite capabilities to do quick calculations that add context to a query and make it more powerful, enabling help desk analysts, security teams, system administrators, and auditors to quickly identify, at scale, disks that are dangerously full.

To learn more about how Audit and Remediation capabilities can help you gather data at scale across your environments, check out the resources below and read the other blogs in this series: 

  1. Carbon Black Cloud Workload TestDrive  

  2. VMware Carbon Black Tech Zone: Mastering Audit & Remediation  

  3. Query Exchange (Carbon Black User Exchange)  

  4. Leveling Up with osquery for Workloads blogs 
