[Linux application] Disk I/O read/write test tool: fio in detail

ToToSun 2022-06-23 18:29:47


1. Introduction to FIO

FIO is an open-source IOPS testing tool for Linux, used mainly for stress testing and performance verification of disks.
It can spawn many threads or processes to perform user-specified types of I/O operations, driven either by a job file (similar in spirit to a Kubernetes YAML manifest) or directly by command-line options. In essence, it is a multithreaded I/O generator that produces a variety of I/O patterns to test the performance of storage devices (most often raw disks).

2. Hard Disk I/O Test Types

  • Random read, random write
  • Sequential read, sequential write
    (During a test, fio can be set to any required mix, e.g. 70% read / 30% write, or 100% read.)

3. FIO Installation and Usage

GitHub address: https://github.com/axboe/fio
Download and install:

$ yum -y install libaio-devel
# Install the libaio development package first; otherwise fio fails with "fio: engine libaio not loadable"
# and has to be rebuilt after libaio-devel is installed.
$ wget https://github.com/axboe/fio/archive/refs/tags/fio-3.10.zip
$ unzip fio-3.10.zip
$ cd fio-fio-3.10
$ ./configure
$ make && make install
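
After installation, a quick sanity check confirms the binary is on the PATH (the version string will match whatever release was built):

$ fio --version
fio-3.10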

Introduction to common parameters

-filename=/dev/sdb # Device or file to test; both filesystem files and raw devices are supported, e.g. /dev/sda2 or /dev/sdb
-direct=1 # Bypass the OS buffer cache so results are more realistic. (On Linux, writes normally land in the page cache first and are flushed to disk in the background, and reads are served from the cache when possible. This speeds up access, but cached data is lost on power failure. With direct=1, fio uses Direct I/O, skipping the cache and reading/writing the disk directly.)
-ioengine=libaio # I/O engine used to issue requests. Common choices: libaio (Linux native asynchronous I/O) and rbd (access Ceph RADOS directly via librbd)
-iodepth=16 # Queue depth of 16. In asynchronous mode the CPU must not issue commands to the device without bound: if an SSD stalls during reads or writes, the system could keep queuing thousands or even tens of thousands of commands, which the SSD cannot absorb and which consume large amounts of memory, potentially hanging the system. The queue depth parameter bounds the number of outstanding I/Os.
-bs=4k # Block size of a single I/O is 4 KB
-numjobs=10 # Number of jobs (threads) for this test is 10
-size=5G # Each job reads/writes 5 GB of data
-runtime=60 # Test duration is 60 seconds; 2m means two minutes. If omitted, the test runs until the full size has been written or read
-rw=randread # Random read I/O
-rw=randwrite # Random write I/O
-rw=randrw # Mixed random read/write I/O
-rw=read # Sequential read I/O
-rw=write # Sequential write I/O
-rw=rw # Mixed sequential read/write I/O
-thread # Create jobs with pthread_create (threads) rather than fork (processes). Processes cost more than threads, so tests usually use -thread
-rwmixwrite=30 # In mixed read/write mode, writes account for 30% (i.e. reads are 70%; rwmixread can be set separately)
-group_reporting # Aggregate the results of all jobs in the report instead of listing each job separately
-name="TDSQL_4KB_read_test" # Name of the test job
Additional parameters:
-lockmem=1g # Use only 1 GB of memory for the test
-zero_buffers # Initialize buffers with all zeros; by default buffers are filled with random data
-random_distribution=random # By default fio uses a completely uniform random distribution for accesses; zipf, pareto, normal, and zoned distributions can be chosen instead
-nrfiles=8 # Number of files generated per job
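
As mentioned in the introduction, the same options can also be written as a job file instead of command-line flags. Below is a minimal sketch; the file name randread.fio and the job section name are illustrative:

; randread.fio: job-file equivalent of the command-line options above
[global]
filename=/dev/sdb
direct=1
ioengine=libaio
bs=4k
size=5G
numjobs=10
iodepth=16
runtime=60
thread
group_reporting

[randread_job]
rw=randread

Run it with: fio randread.fio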

Common commands:

Write test:
fio -filename=/dev/md5 -rw=write -direct=1 -bs=1M -size=64G -numjobs=6 -ioengine=psync -name=write_test
Read test:
fio -filename=/dev/md5 -rw=read -direct=1 -bs=1M -size=64G -numjobs=6 -ioengine=psync -name=read_test
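
Note that writing directly to a raw device such as /dev/md5 destroys any data on it. For a non-destructive test on a mounted filesystem, point -filename at an ordinary file instead (a sketch; the path is illustrative):

fio -filename=/tmp/fio.testfile -rw=write -direct=1 -bs=1M -size=4G -numjobs=1 -ioengine=psync -name=file_write_test
rm /tmp/fio.testfile  # fio leaves the test file behind; remove it afterwards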

4. Test Scenarios

100% random read, 5 GB per job, 4 KB blocks:

fio -filename=/dev/sdb \
-direct=1 -ioengine=libaio \
-bs=4k -size=5G -numjobs=10 \
-iodepth=16 -runtime=60 \
-thread -rw=randread -group_reporting \
-name="TDSQL_4KB_randread_test"

100% sequential read, 5 GB per job, 4 KB blocks:

fio -filename=/dev/sdb \
-direct=1 -ioengine=libaio \
-bs=4k -size=5G -numjobs=10 \
-iodepth=16 -runtime=60 \
-thread -rw=read -group_reporting \
-name="TDSQL_4KB_write_test"

70% random read, 30% random write, 5 GB per job, 4 KB blocks:

fio -filename=/dev/sdb \
-direct=1 -ioengine=libaio \
-bs=4k -size=5G -numjobs=10 \
-iodepth=16 -runtime=60 \
-thread -rw=randrw -rwmixread=70 \
-group_reporting \
-name="TDSQL_4KB_randread70-write_test"

70% sequential read, 30% sequential write, 5 GB per job, 4 KB blocks:

fio -filename=/dev/sdb \
-direct=1 -ioengine=libaio \
-bs=4k -size=5G -numjobs=10 \
-iodepth=16 -runtime=60 \
-thread -rw=rw -rwmixread=70 \
-group_reporting \
-name="TDSQL_4KB_read70-write_test"

In practice, the key is to choose the read/write type and read/write ratio that match the workload you want to simulate.
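
Since only the -rw type and the mix ratio change between scenarios, a small wrapper script can run the whole sweep in one pass. A minimal sketch, assuming /dev/sdb is a disposable test disk and results go to ./fio-results:

#!/bin/bash
# Sweep the common fio scenarios on a single device.
DEV=/dev/sdb            # assumption: a disk whose contents may be destroyed
OUT=./fio-results
mkdir -p "$OUT"

# Pure workloads: random/sequential, read/write.
for RW in randread randwrite read write; do
    fio -filename="$DEV" -direct=1 -ioengine=libaio \
        -bs=4k -size=5G -numjobs=10 -iodepth=16 -runtime=60 \
        -thread -rw="$RW" -group_reporting \
        -name="${RW}_test" > "$OUT/${RW}.log"
done

# Mixed workloads: 70% read / 30% write, random and sequential.
for RW in randrw rw; do
    fio -filename="$DEV" -direct=1 -ioengine=libaio \
        -bs=4k -size=5G -numjobs=10 -iodepth=16 -runtime=60 \
        -thread -rw="$RW" -rwmixread=70 -group_reporting \
        -name="${RW}_70r30w_test" > "$OUT/${RW}_70r30w.log"
done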

Output report
Here is a simple test on a local virtual machine (raw disk test):

# fio -filename=/dev/sdb \
-direct=1 -ioengine=libaio \
-bs=4k -size=5G -numjobs=10 \
-iodepth=16 -runtime=60 \
-thread -rw=randrw -rwmixread=70 \
-group_reporting \
-name="local_randrw_test"
local_randrw_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=16

fio-3.10
Starting 10 threads
Jobs: 10 (f=10): [m(10)][100.0%][r=19.4MiB/s,w=8456KiB/s][r=4969,w=2114 IOPS][eta 00m:00s]
local_randrw_test: (groupid=0, jobs=10): err= 0: pid=11189: Mon Oct 25 11:01:46 2021
read: IOPS=5230, BW=20.4MiB/s (21.4MB/s)(1226MiB/60031msec)
slat (usec): min=2, max=342637, avg=1266.82, stdev=7241.29
clat (usec): min=4, max=459544, avg=20056.81, stdev=24888.90
lat (usec): min=134, max=459586, avg=21329.16, stdev=25378.16
clat percentiles (usec):
| 1.00th=[ 1467], 5.00th=[ 1844], 10.00th=[ 2147], 20.00th=[ 2606],
| 30.00th=[ 3032], 40.00th=[ 3556], 50.00th=[ 4359], 60.00th=[ 6063],
| 70.00th=[ 36439], 80.00th=[ 46924], 90.00th=[ 51643], 95.00th=[ 59507],
| 99.00th=[105382], 99.50th=[117965], 99.90th=[137364], 99.95th=[152044],
| 99.99th=[219153]
bw ( KiB/s): min= 795, max= 4494, per=9.91%, avg=2072.23, stdev=744.04, samples=1195
iops : min= 198, max= 1123, avg=517.74, stdev=186.00, samples=1195
write: IOPS=2243, BW=8972KiB/s (9188kB/s)(526MiB/60031msec)
slat (usec): min=2, max=311932, avg=1272.76, stdev=7272.09
clat (usec): min=6, max=458031, avg=20206.30, stdev=24897.71
lat (usec): min=974, max=459755, avg=21484.12, stdev=25400.41
clat percentiles (usec):
| 1.00th=[ 1500], 5.00th=[ 1860], 10.00th=[ 2147], 20.00th=[ 2606],
| 30.00th=[ 3064], 40.00th=[ 3621], 50.00th=[ 4424], 60.00th=[ 6194],
| 70.00th=[ 36439], 80.00th=[ 46924], 90.00th=[ 51643], 95.00th=[ 59507],
| 99.00th=[105382], 99.50th=[117965], 99.90th=[137364], 99.95th=[149947],
| 99.99th=[200279]
bw ( KiB/s): min= 357, max= 1944, per=9.90%, avg=888.57, stdev=325.49, samples=1195
iops : min= 89, max= 486, avg=221.80, stdev=81.37, samples=1195
lat (usec) : 10=0.01%, 50=0.01%, 100=0.01%, 250=0.02%, 500=0.01%
lat (usec) : 750=0.01%, 1000=0.01%
lat (msec) : 2=7.45%, 4=38.36%, 10=18.10%, 20=1.09%, 50=22.31%
lat (msec) : 100=11.42%, 250=1.24%, 500=0.01%
cpu : usr=0.26%, sys=19.41%, ctx=12026, majf=0, minf=18
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=313975,134655,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=16
Run status group 0 (all jobs):
READ: bw=20.4MiB/s (21.4MB/s), 20.4MiB/s-20.4MiB/s (21.4MB/s-21.4MB/s), io=1226MiB (1286MB), run=60031-60031msec
WRITE: bw=8972KiB/s (9188kB/s), 8972KiB/s-8972KiB/s (9188kB/s-9188kB/s), io=526MiB (552MB), run=60031-60031msec
Disk stats (read/write):
sdb: ios=314008/134653, merge=0/0, ticks=189470/89778, in_queue=279286, util=99.75%
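
If the report needs to be post-processed by scripts, fio can also emit it as JSON via --output-format=json. A sketch, assuming jq is installed (the field paths follow fio's JSON layout):

fio -filename=/dev/sdb -direct=1 -ioengine=libaio -bs=4k -size=5G \
    -numjobs=10 -iodepth=16 -runtime=60 -thread -rw=randrw -rwmixread=70 \
    -group_reporting -name=local_randrw_test \
    --output-format=json --output=result.json
jq '.jobs[0].read.iops, .jobs[0].write.iops' result.json   # pull read/write IOPS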

Output report analysis
The following explains the values reported for each I/O direction:

read/write: statistics for read/write I/O operations (there is also trim, which is rarely used)
slat: submission latency, the time it took to submit the I/O (min: minimum, max: maximum, avg: average, stdev: standard deviation)
clat: completion latency, the time from submission to completion of the I/O
lat: total latency, the time from when fio created the I/O unit until the I/O operation completed
bw: bandwidth statistics
iops: IOPS statistics
lat (nsec/usec/msec): distribution of I/O completion latencies, measured from the I/O leaving fio until it finished. Unlike the separate read/write/trim sections above, this data applies to all I/Os of the reporting group. For example, 10=0.01% means 0.01% of the I/Os completed within 10 us, and 250=0.02% means 0.02% of the I/Os took between 10 and 250 us to complete.
cpu: CPU usage
IO depths: distribution of I/O depths over the job's lifetime
IO submit: how many I/Os were submitted in a single submit call. Each entry denotes that amount and below, down to the previous entry; e.g. 4=100% means that each submit call submitted between 1 and 4 I/Os
IO complete: like submit above, but for how many I/Os were completed in a single call
IO issued rwt: the number of read/write/trim requests issued, and how many of them were short or dropped
IO latency: the I/O depth required to meet the specified latency target

Next, the meanings of the values in Run status group 0 (all jobs), the summary for all jobs in the group:

bw: aggregate bandwidth of the group, followed by the minimum and maximum bandwidth across its threads
io: cumulative I/O performed by all threads in this group
run: the shortest and longest runtimes among the threads in this group

Finally, the meanings of the Linux-specific disk status statistics:

ios: number of I/Os performed by all groups
merge: total number of merges performed by the I/O scheduler
ticks: number of ticks the disk was kept busy (the original wording is "Number of ticks we kept the disk busy")
in_queue: total time spent in the disk queue
util: disk utilization. 100% means the disk was kept busy the whole time; at 50% the disk was idle half of the time
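
While a test is running, the util figure can be cross-checked from a second terminal with iostat from the sysstat package (a sketch; the device name is illustrative):

$ iostat -x 1 sdb
# The %util column should approach 100% while fio keeps the disk saturated.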
Copyright notice: this article was written by ToToSun; please include a link to the original when reposting. https://javamana.com/2022/174/202206231732258084.html