FIO Bash Script For Linux Storage Testing

Simple bash script I use for storage testing in Linux. The script has three dependencies: fio, bc, and jq. On Debian or Ubuntu based systems:
sudo apt-get install fio bc jq

The script:

#!/bin/bash

# this script requires fio, bc & jq

# Directory to test
TEST_DIR=$1

# Parameters for the tests; these should be representative of the workload you want to simulate
BS="1M"             # Block size
IOENGINE="libaio"   # IO engine
IODEPTH="16"        # IO depth sets how many I/O requests a single job can handle at once
DIRECT="1"          # Direct IO at 0 is buffered with RAM which may skew results and I/O 1 is unbuffered
NUMJOBS="5"         # Number of jobs is how many independent I/O streams are being sent to the storage
FSYNC="0"           # Fsync 0 leaves flushing up to Linux 1 force write commits to disk
NUMFILES="5"        # Number of files is number of independent I/O threads or processes that FIO will spawn
FILESIZE="1G"       # File size for the tests, you can use: K M G

# Check if directory is provided
if [ -z "$TEST_DIR" ]; then
    echo "Usage: $0 [directory]"
    exit 1
fi

# Function to perform FIO test and display average output
perform_test() {
    RW_TYPE=$1

    echo "Running $RW_TYPE test with block size $BS, ioengine $IOENGINE, iodepth $IODEPTH, direct $DIRECT, numjobs $NUMJOBS, fsync $FSYNC, using $NUMFILES files of size $FILESIZE on $TEST_DIR"

    # Initialize variables to store cumulative values
    TOTAL_READ_IOPS=0
    TOTAL_WRITE_IOPS=0
    TOTAL_READ_BW=0
    TOTAL_WRITE_BW=0

    for ((i=1; i<=NUMFILES; i++)); do
        TEST_FILE="$TEST_DIR/fio_test_file_$i"

        # Running FIO for each file and parsing output
        OUTPUT=$(fio --name="test_$i" \
                     --filename="$TEST_FILE" \
                     --rw="$RW_TYPE" \
                     --bs="$BS" \
                     --ioengine="$IOENGINE" \
                     --iodepth="$IODEPTH" \
                     --direct="$DIRECT" \
                     --numjobs="$NUMJOBS" \
                     --fsync="$FSYNC" \
                     --size="$FILESIZE" \
                     --group_reporting \
                     --output-format=json)
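        # With --group_reporting, fio folds all NUMJOBS jobs into a single
        # JSON entry, so .jobs[0] below covers the whole group.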

        # Accumulate values
        # fio reports bw in KiB/s; dividing by 1024 converts it to MiB/s
        TOTAL_READ_IOPS=$(echo "$OUTPUT" | jq '.jobs[0].read.iops + '"$TOTAL_READ_IOPS")
        TOTAL_WRITE_IOPS=$(echo "$OUTPUT" | jq '.jobs[0].write.iops + '"$TOTAL_WRITE_IOPS")
        TOTAL_READ_BW=$(echo "$OUTPUT" | jq '(.jobs[0].read.bw / 1024) + '"$TOTAL_READ_BW")
        TOTAL_WRITE_BW=$(echo "$OUTPUT" | jq '(.jobs[0].write.bw / 1024) + '"$TOTAL_WRITE_BW")
    done

    # Calculate averages
    AVG_READ_IOPS=$(echo "$TOTAL_READ_IOPS / $NUMFILES" | bc -l)
    AVG_WRITE_IOPS=$(echo "$TOTAL_WRITE_IOPS / $NUMFILES" | bc -l)
    AVG_READ_BW=$(echo "$TOTAL_READ_BW / $NUMFILES" | bc -l)
    AVG_WRITE_BW=$(echo "$TOTAL_WRITE_BW / $NUMFILES" | bc -l)

    # Format and print averages, omitting 0 results
    [ "$(echo "$AVG_READ_IOPS > 0" | bc)" -eq 1 ] && printf "Average Read IOPS: %'.2f\n" $AVG_READ_IOPS
    [ "$(echo "$AVG_WRITE_IOPS > 0" | bc)" -eq 1 ] && printf "Average Write IOPS: %'.2f\n" $AVG_WRITE_IOPS
    [ "$(echo "$AVG_READ_BW > 0" | bc)" -eq 1 ] && printf "Average Read Bandwidth (MB/s): %'.2f\n" $AVG_READ_BW
    [ "$(echo "$AVG_WRITE_BW > 0" | bc)" -eq 1 ] && printf "Average Write Bandwidth (MB/s): %'.2f\n" $AVG_WRITE_BW

}

# Run tests
perform_test randwrite
perform_test randread
perform_test write
perform_test read
perform_test readwrite

# Clean up
for ((i=1; i<=NUMFILES; i++)); do
    rm "$TEST_DIR/fio_test_file_$i"
done
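
To run it, save the script (for example as fio-test.sh; the file name here is just an assumption), make it executable, and pass the directory you want to test:

chmod +x fio-test.sh
./fio-test.sh /mnt/pool/dataset

The path /mnt/pool/dataset is only an example; point it at any directory on the storage you want to measure. The test files are removed automatically when the run finishes.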

Video covering how to use the tool

Hello,
I’m trying to use your script directly on TrueNAS-SCALE-23.10.0.1.
bc is not installed by default, so I modified the script to use only jq (and fio).
I notice that the IOPS and MB/s numbers are equal (the same problem shows up in your video).
Is it OK to use .jobs[0].read.io_kbytes / 1024 instead of .jobs[0].read.bw / 1024?
Best Regards,
Antonio

Tom - it also needs jq to be installed.


Weird, I updated the post to show that jq is needed. But I am not sure about the IOPS: they are the same when using BS="1M" but not when using BS="64K". I will have to do some more testing.
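
A likely explanation, assuming standard fio reporting: bandwidth is just IOPS multiplied by the block size. With BS="1M" each I/O transfers exactly 1 MiB, so the MB/s and IOPS figures are numerically identical by definition; with BS="64K", MB/s = IOPS × 64/1024, so the two numbers diverge.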

My numbers are weird as well. I will check the script.

Running randwrite
Average Write IOPS: 101.28
Average Write Bandwidth (MB/s): 101.28

Running randread
Average Read IOPS: 483.46
Average Read Bandwidth (MB/s): 483.46

Running write
Average Write IOPS: 116.23
Average Write Bandwidth (MB/s): 116.23

Running read
Average Read IOPS: 504.96
Average Read Bandwidth (MB/s): 504.96

Running readwrite 
Average Read IOPS: 94.02 
Average Write IOPS: 98.30
Average Read Bandwidth (MB/s): 94.02
Average Write Bandwidth (MB/s): 98.30

Getting some strange results on writes.
Reads seem about right for a pool of spinning disks (2 VDEV with 3 wide RAIDZ1).

Running randwrite test with block size 1M, ioengine libaio, iodepth 16, direct 1, numjobs 5, fsync 0, using 5 files of size 1G on 
Average Write IOPS: 2,849.30
Average Write Bandwidth (MB/s): 2,849.30
Running randread test with block size 1M, ioengine libaio, iodepth 16, direct 1, numjobs 5, fsync 0, using 5 files of size 1G on 
Average Read IOPS: 138.47
Average Read Bandwidth (MB/s): 138.47
Running write test with block size 1M, ioengine libaio, iodepth 16, direct 1, numjobs 5, fsync 0, using 5 files of size 1G on 
Average Write IOPS: 8,108.69
Average Write Bandwidth (MB/s): 8,108.69
Running read test with block size 1M, ioengine libaio, iodepth 16, direct 1, numjobs 5, fsync 0, using 5 files of size 1G on 
Average Read IOPS: 664.34
Average Read Bandwidth (MB/s): 664.34

I was traveling the last couple of days and have not had time to sort out the script yet.

Hi Tom.

I run Ubuntu Server 22.04.3 LTS and additionally needed jq. This script helped me verify my 10Gbps NFS share at 64K and 128K block sizes against my TrueNAS server dataset set to a 128K record size. I’m very pleased with this - thank you.
