I/O Schedulers (Updated 19/12/16)

My official XDA thread is here: Official XDA Thread

Recommended apps for manipulating kernel values:
1. Kernel Adiutor (Free to change scheduler and tune variables)
2. Kernel Adiutor-Mod (Free to change scheduler and tune variables)
3. Compatible kernel managers (e.g. STweaks, Synapse, UKM, etc.)


This page includes:

- Descriptions
- Recommendations
- Comparisons
- Graphs
- Tunables

Note to people who want to reuse this information: a few websites have included my information in their own threads. Please make sure to give appropriate credit to the original authors (including myself); that way there will be fewer problems! Read the policy for more information.

Why change your phone's I/O Scheduler?

Most phone manufacturers keep your phone's I/O scheduler locked, so users are unable to modify values that could change the performance of the phone. However, once your phone is rooted, you can change these values, potentially boosting performance and even slightly increasing battery life. Here is a thorough guide to all of the common I/O schedulers.

What is an I/O Scheduler:

Input/output (I/O) scheduling is the method by which computer operating systems decide the order in which block I/O operations are submitted to storage volumes. I/O scheduling is sometimes called 'disk scheduling'.

I/O schedulers can serve many purposes; some common goals are:
  • To minimise time wasted by hard disk seeks.
  • To prioritise a certain processes' I/O requests.
  • To give a share of the disk bandwidth to each running process.
  • To guarantee that certain requests will be issued before a particular deadline.

Which schedulers are available? 
  • CFQ 
  • Deadline 
  • VR 
  • Noop
  • BFQ
  • SIO (Simple)
  • ROW
  • ZEN
  • SIOplus
  • FIOPS
  • FIFO
  • Tripndroid
  • Test
  • Maple 
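
If your kernel exposes the block queue through sysfs (as most rooted Android devices do), you can check which of these schedulers were actually compiled in. A minimal sketch, assuming an eMMC device named mmcblk0 (the block device name is an assumption; on some devices it is sda):

```shell
# List the compiled-in schedulers; the active one is shown in [brackets].
QUEUE=/sys/block/mmcblk0/queue
[ -r "$QUEUE/scheduler" ] && cat "$QUEUE/scheduler"

# Extract just the active scheduler name from a line like "noop deadline [cfq] row".
active=$(sed 's/.*\[\(.*\)\].*/\1/' "$QUEUE/scheduler" 2>/dev/null)
echo "Active scheduler: $active"

# Switching (requires root): write one of the listed names back, e.g.
# echo row > "$QUEUE/scheduler"
```

Kernel managers like Kernel Adiutor do exactly this write under the hood.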

Things to look out for in an I/O scheduler:

There are many I/O schedulers available on Android, but there are some important things to look out for before selecting a new scheduler:

Speed
- Some schedulers are known to be faster than others. A number of factors affect speed, including the simplicity of the scheduler's algorithm and the prioritisation of certain requests (e.g. async reads).

Battery life 
- Generally, if a scheduler tries to be fair (like CFQ), it will try to share I/O resources equally, so battery life may decrease slightly. It is important to note that I/O schedulers have minimal impact on battery life!

Stability
- Older and simpler schedulers (like Noop) are usually more stable than newer, more complex schedulers. Stability is also affected by other factors, such as how the scheduler is implemented by your kernel maintainer/developer.

Smoothness
- Often confused with speed, smoothness refers to the lack of delay when switching between apps or navigating the UI. A scheduler that is fast may not necessarily be smooth. Prioritisation of read requests (as in ROW) or async reads (VR and ZEN) will increase smoothness.


CFQ (Completely Fair Queuing):
The Completely Fair Queuing scheduler maintains a scalable per-process I/O queue and attempts to distribute the available I/O bandwidth equally among all I/O requests. Each per-process queue contains synchronous requests from that process. The time slice allocated to each queue depends on the priority of the 'parent' process. V2 of CFQ includes fixes for process I/O starvation and allows some small backward seeks in the hope of improving responsiveness.

Advantages:
- Has well-balanced I/O performance
- Excellent on multiprocessor systems 
- Regarded as a stable I/O scheduler
- Good for multitasking


Disadvantages:
- Some users report that media scanning takes longest to complete with CFQ, possibly because bandwidth is distributed equally to all I/O operations during boot-up, so media scanning is not given any special priority
- Jitter (worst-case delay) can sometimes be very high because of the number of tasks competing with each other
- Under constant load, the phone will experience increased I/O latency due to the way the scheduler tries to create 'fairness'

The bottom line: One of the best all-round I/O schedulers available. CFQ is better suited to traditional hard disks, but it may still give better throughput in some situations.


Deadline:
The goal of the Deadline scheduler is to attempt to guarantee a start service time for a request. It does that by imposing a deadline on all I/O operations to prevent starvation of requests. It also maintains two deadline queues, in addition to the sorted queues (both read and write). Deadline queues are basically sorted by their deadline (the expiration time), while the sorted queues are sorted by the sector number.

Before serving the next request, the Deadline scheduler decides which queue to use. Read queues are given a higher priority, because processes usually block on read operations. Next, the Deadline scheduler checks whether the first request in the deadline queue has expired; if so, it is served immediately. Otherwise, the scheduler serves a batch of requests from the sorted queue. In both cases, the scheduler also serves a batch of requests following the chosen request in the sorted queue.

Advantages:
- Nearly a real-time scheduler
- Excels at reducing the latency of any given single I/O
- Best scheduler for database access and queries
- Does quite well in benchmarks, most likely the best
- Like Noop, a good scheduler for solid-state/flash storage

Disadvantages:
- If the phone is overloaded, processes may crash or close unexpectedly

The bottom line: A good all-round scheduler. If you want good performance, you should try deadline. 

ROW (Read Over Write):
The ROW I/O scheduler was developed with the needs of mobile devices in mind. On mobile devices we favour user experience above everything else, so READ I/O requests are given as much priority as possible. Mobile devices also don't run as many parallel threads as desktops; usually there is a single thread, or at most two threads working simultaneously for read and write. Favouring READ requests over WRITEs greatly decreases READ latency. The main idea of the ROW scheduling policy is: if there are READ requests in the pipe, dispatch them, but don't starve the WRITE requests too much.


Advantages:
- Faster UI navigation and better overall phone experience
- Faster boot times and app launch times


Disadvantages:
- Not great for heavy multitasking
- Slower write speeds

The bottom line: A good all-round scheduler despite being biased towards read operations. Your device may feel more responsive with ROW because it was designed for mobile devices. Older devices may see more of a performance boost than newer ones.

SIO (Simple):
Simple I/O aims to keep overhead to a minimum in order to serve I/O requests with low latency. There are no priority-queue concepts, only basic merging, and no reordering or sorting of requests. SIO is a mix between Noop and Deadline.


Advantages:
- Simple and stable
- Minimised request starvation

Disadvantages:
- Slower random write speeds on flash storage than other schedulers
- Sequential read speeds on flash storage are not as good as with other I/O schedulers
- Not the best scheduler for benchmarks

The bottom line: One of my favourite schedulers and a good all-rounder, but people seeking maximum performance should avoid it.

Noop:
Noop inserts all incoming I/O requests into a First In, First Out queue and implements request merging. It is best used with storage devices that do not depend on mechanical movement to access data (yes, like our flash storage). The advantage is that flash storage does not require the reordering of multiple I/O requests that normal hard drives need.


Advantages:
- Serves I/O requests with the fewest CPU cycles
- Best for flash drives since there is no seeking penalty.
- Good data throughput on db systems
- Does great in benchmarks
- Is very reliable


Disadvantages:
- The simplicity that saves CPU cycles also means requests are not optimised, which can reduce performance
- Not the most responsive I/O scheduler
- Not very good at multitasking (especially heavy workloads)

The bottom line: Modern smartphones often use Noop as the default scheduler because it works well with flash-based storage. However, older devices may feel slower when it is selected. If you want a very simple I/O scheduler algorithm (for battery-life or latency reasons), you can select this.

VR:
Unlike other schedulers, VR does not handle synchronous and asynchronous requests separately; instead it imposes fair, deadline-based handling on all requests, and the next request to be served is chosen as a function of its distance from the last request.


Advantages:
- Generally excels in random writes


Disadvantages:
- Performance is variable (it only performs well sometimes)
- Sometimes unstable and unreliable

The bottom line: Not the best scheduler to select. You will probably find that other schedulers are performing better while being more stable. 

BFQ (Budget Fair Queuing):
Instead of CFQ's time slices, BFQ assigns budgets. The disk is granted to an active process until its budget (a number of sectors) expires. BFQ assigns high budgets to non-read tasks. The budget assigned to a process varies over time as a function of its behaviour.


Advantages:
- Has a very good USB data transfer rate
- The best scheduler for playback of HD video recording and video streaming (due to less jitter than CFQ Scheduler, and others)
- Regarded as a very precise working Scheduler
- Delivers 30% more throughput than CFQ
- Good for multitasking, more responsive than CFQ


Disadvantages:
- Not the best scheduler for benchmarks
- The higher budgets allocated to a process can hurt interactivity and increase latency

The bottom line: There are better schedulers out there that will perform better than BFQ. It is quite a complex scheduler that is better designed for traditional hard disks. 

ZEN:
ZEN is based on the Noop, Deadline and SIO I/O schedulers. It is an FCFS (first come, first served) based algorithm, but not strictly FIFO. ZEN does not do any sorting; it uses deadlines for fairness and treats synchronous requests with priority over asynchronous ones. Other than that, it is pretty much Noop blended with VR features.

ZEN V2 is an optimized version of the original ZEN scheduler tuned by kernel developer DorimanX. It has been modified to work better with android devices.


Advantages:
- Well-rounded I/O scheduler
- Very efficient IO Scheduler
- More stable than VR, more polished

Disadvantages:
- Performance is variable (it only performs well sometimes)

The bottom line: It is pretty much a better version of VR, performs quite well and is stable. Overall this is a good choice for most smartphones. 

SIOplus:
Based on the original SIO scheduler with improvements: functionality for specifying the starvation of async reads against sync reads; the starved-write-requests counter only counts when there actually are write requests in the queue; and a bug fix.


Advantages:
- Better read and write speeds than the original SIO scheduler

Disadvantages:
- Fluctuations in performance may be observed
- Not found in all kernels

The bottom line: If you liked SIO, you will like SIOplus. It performs slightly better in some usage scenarios, but performance seekers should look elsewhere.

FIOPS (Fair IOPS):
This newer I/O scheduler is designed around the following assumptions about flash-based storage devices: there is no I/O seek time; read and write I/O costs usually differ from rotating media; the time to complete a request depends on the request size; and the drive offers high throughput and high IOPS with low latency. FIOPS tries to fix the gaps in CFQ. It is IOPS-based, so it only targets drives without I/O seek. It is quite similar to CFQ, but the dispatch decision is made according to IOPS instead of time slices.


Advantages:
- Achieves high read and write speeds in benchmarks
- Faster app launching time and overall UI experience

Disadvantages:
- Not the most responsive I/O scheduler (can make the phone lag)
- Not good at heavy multitasking

The bottom line: Most people who use FIOPS will get a noticeable performance improvement. However, you may get issues with scrolling and general lags. 

FIFO (First in First Out):
First in First Out scheduler. As the name says, it implements a simple queue that processes requests in the order they come in.


Advantages:
- Serves I/O requests with the fewest CPU cycles
- Best for flash drives since there is no seeking penalty.
- Good data throughput on db systems


Disadvantages:
- The simplicity that saves CPU cycles also means requests are not optimised, which can reduce performance
- Not very good at multitasking

The bottom line: Like Noop, but is less common. If you want a very simple I/O scheduler algorithm (because of battery life or latency reasons), you can select this.

Tripndroid:
A new I/O scheduler based on Noop, Deadline and VR, meant to have minimal overhead. Made by TripNRaVeR.

Advantages:
- Great I/O performance for everyday multitasking
- Well rounded and efficient IO scheduler
- Very responsive I/O scheduler (Compared to FIOPS)

Disadvantages:
- Performance varies between devices (some devices perform really well)

The bottom line: Tripndroid isn't very common; there are other schedulers you can choose that may perform similarly or better.

Test:
The Test I/O scheduler is a duplicate of Noop with an added test utility. It allows testing a block device by dispatching specific requests according to a test case and declaring PASS/FAIL according to the requests' completion error codes.

Advantages:
- Same as Noop, but can be useful to kernel developers

Disadvantages:
- Same as Noop

The bottom line: Shouldn't really be used by anyone. You should be using Noop instead of this.  

Maple:
Maple is based on the ZEN and SIO I/O schedulers. It uses ZEN's first-come-first-served style algorithm with separate read/write requests and SIO's improved former/latter request handling. Maple is biased towards handling asynchronous requests before synchronous ones, and read requests before writes. While this can hurt write-intensive tasks like file copying, it slightly improves UI responsiveness. When the device is asleep, Maple increases the expiry time of requests so that it can handle them more slowly, causing less overhead.

Advantages:
- Well-rounded I/O scheduler
- Very efficient IO Scheduler

Disadvantages:
- Performance varies between devices (some devices perform really well)

The bottom line: This is still a very new I/O scheduler which should perform slightly better than ZEN. It will continue to improve with more development.

I/O Read-Ahead Buffer

If you've used a custom kernel, you have probably heard the term read-ahead buffer (or cache). It is basically a cache for files that have been opened recently on your device, so they can be quickly accessed again if needed. By default on Android, this value is set to 128 KB. Usually a larger buffer means more files can be cached, which can mean higher read and write speeds, but it can also result in more I/O latency. Beyond a certain point, increasing the read-ahead value gives no benefit to read/write speeds.

Have a look at the graph below:

Read-ahead buffer comparison


Unlike the choice of I/O scheduler, the ideal read-ahead buffer depends on the size of your flash storage (internal/external). Below are the recommended settings for each size that should yield the best performance (this differs between setups).

Less than 8GB - 128KB
8GB - 512KB
16GB - 1024KB
32GB or above - 2048KB 

Any setting above what I have recommended may yield no extra performance!

If you have issues such as failed reads and writes after changing these values, try a smaller value. Please note that some SD cards may experience issues after setting a higher buffer value.
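
As a sketch of the table above, a small shell function can pick the buffer size from the storage size and write it to sysfs (the device path and the helper name are my own assumptions; adjust for your device, and root is required for the write):

```shell
QUEUE=/sys/block/mmcblk0/queue

# Map total storage size (GB) to the recommended read-ahead value (KB),
# following the table above.
pick_read_ahead() {
  if   [ "$1" -lt 8 ];  then echo 128
  elif [ "$1" -lt 16 ]; then echo 512
  elif [ "$1" -lt 32 ]; then echo 1024
  else                       echo 2048
  fi
}

ra=$(pick_read_ahead 16)   # a 16 GB device gets 1024 KB
# Apply it; re-test reads/writes afterwards and back off to a smaller value if they fail.
[ -w "$QUEUE/read_ahead_kb" ] && echo "$ra" > "$QUEUE/read_ahead_kb"
```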

What to remember:
- More isn't always better!
- Some SD cards can't handle high read ahead cache values, so make sure you have a genuine high quality SD card
- Default is good enough for most people, but isn't the best for performance
- Performance difference varies between devices

Source: http://andrux-and-me.blogspot.com.au/2014/06/various-conditions-and-io-performance.html


Results:

Phone: Sony Xperia Z2
Scheduler: as indicated
Read Ahead: 512kB
App: AndroBench 4

Here is a graph of the performance of the I/O schedulers. Note: a higher score doesn't mean it is the best I/O scheduler. These numbers mean little for real-world performance, so take the following as a mere glimpse of each scheduler's throughput.

Sequential in MB/sec (Higher is better)
I/O scheduler Sequential Performance

Random in IOPS (Higher is better)

I/O scheduler Random R/W performance

Thanks to haldi for the graphs!

Source:  http://andrux-and-me.blogspot.com.au/2014/05/io-schedulers-and-performance-2.html and http://forum.xda-developers.com/showpost.php?p=58807943&postcount=85

Recommended IO schedulers:

For everyday usage:

- ZEN (First choice)
- ROW (Second choice)
- SIO (Third choice)
- Noop
- Deadline

For battery life:

- Noop (First choice)
- FIOPS (Second choice)
- SIO (Third choice)
- ROW (Fourth choice)

For gaming: 

- Deadline (First choice)
- ZEN  (Second choice)
- ROW (Third choice)
- CFQ 

For performance(Benchmarking):

- FIOPS (First choice) 
- Deadline (Second choice)
- Noop

For heavy multitasking:

- BFQ (First choice)
- CFQ (Second choice)
- Deadline (Third choice)

IO Scheduler Comparison

Overall performance:

FIOPS > Noop > ZEN > Tripndroid > SIO > ROW > SIOplus > VR > Deadline > BFQ > CFQ

Multitasking performance:

Less Apps<------------------------------------------------------------>Many Apps
Noop < FIFO < FIOPS < SIO < SIOplus < ROW < Tripndroid < ZEN < Deadline < VR < CFQ < BFQ

Battery life:

Best<-------------------------------------------------------------------------> Worst
Noop > FIFO > FIOPS > SIO > SIOplus > ROW > ZEN > Tripndroid > Deadline > VR > CFQ > BFQ

In the end, the best I/O scheduler cannot easily be decided by anyone on the internet; you will need to choose the scheduler that satisfies your needs and that you think works best.

I/O scheduler tunables:

Deadline and SIO: 
fifo_batch: This parameter controls the maximum number of requests per batch. It tunes the balance between per-request latency and aggregate throughput. When low latency is the primary concern, smaller is better (a value of 1 yields first-come, first-served behaviour). Increasing fifo_batch generally improves throughput at the cost of latency variation. The default is 16.

front_merges: A request that enters the scheduler is possibly contiguous to a request that is already on the queue. Either it fits in the back of that request, or it fits at the front. Hence it’s called either a back merge candidate or a front merge candidate. Typically back merges are much more common than front merges. You can set this tunable to 0 if you know your workload will never generate front merges. Otherwise leave it at its default value 1.

read_expire: In these schedulers, there is some form of deadline for servicing each read request; the focus is read latency. When a read request first enters the I/O scheduler, it is assigned a deadline of the current time + the read_expire value in milliseconds. The default value is 500 ms.

write_expire: Similar to read_expire, this applies only to write requests. The default value is 5000 ms.

writes_starved: Typically more attention is given to read requests than to write requests, but this can't go on forever. This tunable controls how many read batches can be processed before a single write batch; the higher the value, the more preference is given to reads. The default value is 1.
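
A latency-leaning sketch of these tunables, assuming Deadline is the active scheduler on mmcblk0 (device path and values are illustrative assumptions, not canon):

```shell
IOSCHED=/sys/block/mmcblk0/queue/iosched

# Guarded so it silently no-ops on devices where these files don't exist.
if [ -d "$IOSCHED" ] && [ -w "$IOSCHED/fifo_batch" ]; then
  echo 8    > "$IOSCHED/fifo_batch"    # smaller batches favour latency over throughput
  echo 1    > "$IOSCHED/front_merges"  # leave front merging on (the default)
  echo 250  > "$IOSCHED/read_expire"   # tighter read deadline than the 500 ms default
  echo 5000 > "$IOSCHED/write_expire"  # keep the default write deadline
fi
```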


add_random: In some cases, the overhead of I/O events contributing to the entropy pool for /dev/random is measurable. In such cases, it may be desirable to set this value to 0.


nomerges: This tunable is primarily a debugging aid. Most workloads benefit from request merging (even on faster storage such as SSDs). In some cases, however, it is desirable to disable merging, such as when you want to see how many IOPS a storage back-end can process without disabling read-ahead or performing random I/O.


nr_requests: If you have a latency-sensitive application, consider lowering the value of nr_requests in your request queue and limiting the command queue depth on the storage to a low number (even as low as 1), so that writeback I/O cannot allocate all of the available request descriptors and fill the device queue with write I/O. Once nr_requests have been allocated, all other processes attempting to perform I/O are put to sleep to wait for requests to become available. This makes things more fair, as the requests are then distributed in a round-robin fashion (instead of letting one process consume them all in rapid succession).


optimal_io_size: In some circumstances, the underlying storage reports an optimal I/O size. This is most common in hardware and software RAID, where the optimal I/O size is the stripe size. If this value is reported, applications should issue I/O aligned to, and in multiples of, the optimal I/O size whenever possible.


rotational: Traditional hard disks are rotational (made up of spinning platters); SSDs are not. Most SSDs advertise this properly. If, however, you come across a device that does not advertise this flag properly, it may be necessary to set rotational to 0 manually; when rotational is disabled, the I/O elevator does not use seek-reduction logic, since there is little penalty for seek operations on non-rotational media.

rq_affinity: I/O completions can be processed on a different CPU from the one that issued the I/O. Setting rq_affinity to 1 causes the kernel to deliver completions to the CPU on which the I/O was issued, which can improve CPU data-caching effectiveness.
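
A sketch combining the queue-level tunables above for flash storage (the device path is an assumption, and root is required for the writes):

```shell
QUEUE=/sys/block/mmcblk0/queue

# Each write is guarded so the script no-ops where a file is absent or read-only.
if [ -d "$QUEUE" ]; then
  [ -w "$QUEUE/rotational" ]  && echo 0   > "$QUEUE/rotational"   # flash: no seek penalty
  [ -w "$QUEUE/rq_affinity" ] && echo 1   > "$QUEUE/rq_affinity"  # complete I/O on the issuing CPU
  [ -w "$QUEUE/add_random" ]  && echo 0   > "$QUEUE/add_random"   # skip entropy-pool overhead
  [ -w "$QUEUE/nr_requests" ] && echo 128 > "$QUEUE/nr_requests"  # lower for latency-sensitive apps
fi
```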

CFQ:

back_seek_max: The scheduler tries to guess when the next request will require going backwards from the current position on the disk. Since going back can be time consuming, it may, in anticipation, move back on the disk prior to the next request. This setting, given in KB, determines the maximum distance to go back. The default value is 16 KB.
Note that in a phone or tablet the storage is flash memory, so there is no disk head to be repositioned; this tunable is therefore not very effective, as backward reads are not that costly.

back_seek_penalty: This parameter is used to compute the cost of backward seeking. If the backward distance of a request is just 1 from a front request, the seeking cost of the two requests is considered equivalent and the scheduler will not bias towards one or the other. This parameter defaults to 2, so if the backward distance is only 1/2 of the forward distance, CFQ considers the backward request close enough to the current head location and treats it as a forward request.

fifo_expire_async & fifo_expire_sync: fifo_expire_async sets the timeout of asynchronous requests; CFQ maintains a FIFO (first-in, first-out) list to manage timed-out requests. The default value is 250 ms, and a smaller value means the timeout triggers more quickly. fifo_expire_sync applies similarly to synchronous requests; the default is 125 ms.

group_idle: If this is set, CFQ will idle on the last process issuing I/O in a cgroup. This should be set to 1, along with using proportional-weight I/O cgroups and setting slice_idle to 0, as flash memory is a fast storage mechanism.

group_isolation: If set (to 1), there is a stronger isolation between groups at the expense of throughput. If disabled, Scheduler is biased towards sequential requests. When enabled group isolation provides balance for both sequential and random workloads. The default value is 0 (disabled).

low_latency: When set (to 1), CFQ attempts to build a backlog of write requests. It will give a maximum wait time of 300 ms for each process issuing I/O on a device. This offers fairness over throughput. When disabled (set to 0), it will ignore target latency, allowing each process in the system to get a full time slice. This is enabled by default.

quantum: This option controls the maximum number of requests processed at a time. The default value is 8. Increasing the value can improve performance, though the latency of some I/O may increase because more requests are buffered inside the storage.

slice_async: This parameter controls the time slice given to asynchronous requests. The default value is 40 ms.

slice_idle: When a task has no more requests to submit in its time slice, the scheduler waits a while before scheduling the next thread, to improve locality. The default value is 0, indicating no idling; however, a value of zero increases the overall number of seeks, so a non-zero value may be beneficial on rotating disks.

slice_sync: This setting determines the time slice allotted to a process I/O. The default is 100 ms.

timeout_sync & timeout_async: These parameters determine maximum disk time given to a task, respectively for synchronous and asynchronous queues. It allows the user to control the latencies imposed by the scheduler.
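
Following the advice above (slice_idle of 0 and group_idle of 1 on flash), a sketch assuming CFQ is the active scheduler on mmcblk0 (path and values are assumptions; root required):

```shell
IOSCHED=/sys/block/mmcblk0/queue/iosched

# Guarded so it no-ops when CFQ isn't active or the files are missing.
if [ -w "$IOSCHED/slice_idle" ]; then
  echo 0 > "$IOSCHED/slice_idle"   # don't idle between requests on flash
  echo 1 > "$IOSCHED/group_idle"   # idle per cgroup instead
  echo 1 > "$IOSCHED/low_latency"  # favour fairness/latency over raw throughput
fi
```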

BFQ:

max_budget: This determines how much of a queue's requests are serviced, based on the number of sectors. A larger value increases throughput for single tasks and for the system, in proportion to the percentage of sequential requests issued, at the cost of increasing the maximum latency a request may incur. The default value is 0, which enables auto-tuning.

max_budget_async_rq: This setting determines the maximum number of requests served from async queues before selecting a new queue.

low_latency: When this is set to 1 (default is 1), interactive and soft real-time applications experience a lower latency.

ROW:

hp_read_quantum: Dispatch quantum for the high priority READ queue. Default: 10

rp_read_quantum: Dispatch quantum for the regular priority READ queue. Default: 100

hp_swrite_quantum: Dispatch quantum for the high priority Synchronous WRITE queue. Default: 1

rp_swrite_quantum: Dispatch quantum for the regular priority Synchronous WRITE queue. Default: 1

rp_write_quantum: Dispatch quantum for the regular priority WRITE queue. Default: 1

lp_read_quantum: Dispatch quantum for the low priority READ queue. Default: 1

lp_swrite_quantum: Dispatch quantum for the low priority Synchronous WRITE queue. Default: 1

read_idle: Determines the length of idling on the read queue in ms (if idling is enabled on that queue). Default: 5 ms

read_idle_freq: Determines the frequency of inserting READ requests that will trigger idling; this is the time in ms between inserting two READ requests. Default: 5 ms

VR and Zen:

rev_penalty: Penalty for reversing head direction.

fifo_batch: Number of requests to issue before checking for expired requests.

sync_expire: Deadline for synchronous requests.

async_expire: Deadline for asynchronous requests.

Thanks to perseus for his awesome tunable guide! Credits should go to him!

Source: xda-developers

