Choosing the best-suited I/O scheduler and algorithm depends not only on the workload but also on the hardware: single ATA disk systems, SSDs, RAID arrays, and network storage systems, for example, each require different tuning strategies.
The currently active I/O scheduler can be checked by reading the file below; the scheduler shown in square brackets is the active one:
# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
From what I have observed, the default I/O scheduler on SLES is 'cfq', while on Red Hat Enterprise Linux 7 it is 'deadline'. The steps in this article were validated on SLES 11 and RHEL 7.
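For reference, switching the active scheduler for the current session is just a write to the same file (shown here for sda; the change does not survive a reboot):
# echo cfq > /sys/block/sda/queue/scheduler
# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]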
This article does not cover the use case of each scheduler and algorithm, so I assume you already know which scheduler serves your purpose and that you want the change to persist across reboots.
By default, any changes you make to the algorithms apply only to the current session.
By "algorithms" I mean the tunable values that you can set for an individual scheduler. For example, CFQ has the following tunables:
# ls -l /sys/block/sda/queue/iosched/
total 0
-rw-r--r-- 1 root root 4096 Jul 11 10:54 back_seek_max
-rw-r--r-- 1 root root 4096 Jul 11 10:54 back_seek_penalty
-rw-r--r-- 1 root root 4096 Jul 11 10:54 fifo_expire_async
-rw-r--r-- 1 root root 4096 Jul 11 10:54 fifo_expire_sync
-rw-r--r-- 1 root root 4096 Jul 11 10:54 group_idle
-rw-r--r-- 1 root root 4096 Jul 11 10:54 low_latency
-rw-r--r-- 1 root root 4096 Jul 11 10:54 quantum
-rw-r--r-- 1 root root 4096 Jul 11 10:54 slice_async
-rw-r--r-- 1 root root 4096 Jul 11 10:54 slice_async_rq
-rw-r--r-- 1 root root 4096 Jul 11 10:54 slice_idle
-rw-r--r-- 1 root root 4096 Jul 11 10:54 slice_sync
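To view the current values of all these tunables at once, you can grep the directory; each output line shows the file path followed by its value:
# grep . /sys/block/sda/queue/iosched/*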
Suppose you would like to modify these values: the changes you make will be active only for the current session. To make them permanent and persistent across reboots, follow the steps below.
For the sake of this article, I will change the "low_latency" value from its default of "0" to "1".
Let's first check the current value of low_latency:
# cat /sys/block/sda/queue/iosched/low_latency
0
If you have more than one disk, each disk has its own sysfs entry; the output above was for sda. Since my node has two disks in a software RAID, here is the same check for sdb:
# cat /sys/block/sdb/queue/iosched/low_latency
0
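To change the value for the current session only, you could write to these files directly, but the setting would be lost at the next reboot:
# echo 1 > /sys/block/sda/queue/iosched/low_latency
# echo 1 > /sys/block/sdb/queue/iosched/low_latency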
To make the change persistent, navigate to the udev rules directory:
# cd /lib/udev/rules.d/
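Check first whether an existing rule file already sets the I/O scheduler values, for example with a quick grep:
# grep -l "scheduler" *.rules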
If no such file exists, create a new one:
# touch 60-ssd-scheduler.rules
Add the following content to the new file:
ACTION=="add|change", KERNEL=="sd[a-z]", TEST!="queue/iosched/low_latency", ATTR{queue/scheduler}="cfq"
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/iosched/low_latency}="1", ATTR{queue/scheduler}="cfq"
Save and close the file.
Here we match "add" and "change" events for all available disks (sd[a-z]) and apply two changes:
- Set the I/O scheduler to 'cfq'
- Set low_latency to '1'
The first rule uses TEST!= to switch the scheduler to 'cfq' only when the low_latency attribute does not yet exist; once the scheduler is cfq, the queue/iosched directory is populated and the second rule can set low_latency.
Similarly, you can add rules for more algorithm changes using this file.
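For instance, since the file name refers to SSDs, you could restrict the rules to non-rotational devices by matching the queue/rotational attribute; a sketch along those lines:
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="cfq"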
Next, validate that your rules work by executing the command below:
# udevadm test /sys/block/sda
(This produces a long output; look for the lines below.)
ATTR '/sys/devices/pci0000:00/0000:00:15.0/0000:03:00.0/host0/target0:0:0/0:0:0:0/block/sda/queue/scheduler' writing 'cfq' /usr/lib/udev/rules.d/60-ssd-scheduler.rules:1
ATTR '/sys/devices/pci0000:00/0000:00:15.0/0000:03:00.0/host0/target0:0:0/0:0:0:0/block/sda/queue/iosched/low_latency' writing '1' /usr/lib/udev/rules.d/60-ssd-scheduler.rules:2
ATTR '/sys/devices/pci0000:00/0000:00:15.0/0000:03:00.0/host0/target0:0:0/0:0:0:0/block/sda/queue/scheduler' writing 'cfq' /usr/lib/udev/rules.d/60-ssd-scheduler.rules:2
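Note that udevadm test simulates the event run for the device. To make sure the new rules are actually applied to disks that are already present, without waiting for a reboot, you can reload the udev rules and trigger a change event for the block devices (depending on your udev version, the reload option is --reload-rules or --reload):
# udevadm control --reload-rules
# udevadm trigger --subsystem-match=block --action=change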
Then verify the changes in your current session:
# cat /sys/block/sda/queue/iosched/low_latency
1
# cat /sys/block/sdb/queue/iosched/low_latency
1
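If you have many disks, a small shell loop checks them all at once (assuming all of them are running cfq, so the low_latency tunable exists):
# for d in /sys/block/sd*; do echo "$d: $(cat $d/queue/iosched/low_latency)"; done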
Lastly, reboot the node to confirm that your changes are still present after a reboot.
I hope the article was useful.