History log of /linux-5.15/block/blk-throttle.c (Results 1 – 25 of 465)
Revision Date Author Comments
# f2006e27 12-Jul-2013 Thomas Gleixner <tglx@linutronix.de>

Merge branch 'linus' into timers/urgent

Get upstream changes so we can apply fixes against them

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>


# 36805aae 11-Jul-2013 Linus Torvalds <torvalds@linux-foundation.org>

Merge branch 'for-3.11/core' of git://git.kernel.dk/linux-block

Pull core block IO updates from Jens Axboe:
"Here are the core IO block bits for 3.11. It contains:

- A tweak to the reserved tag logic from Jan, for weirdo devices with
just 3 free tags. But for those it improves things substantially
for random writes.

- Periodic writeback fix from Jan. Marked for stable as well.

- Fix for a race condition in IO scheduler switching from Jianpeng.

- The hierarchical blk-cgroup support from Tejun. This is the grunt
of the series.

- blk-throttle fix from Vivek.

Just a note that I'm in the middle of a relocation, whole family is
flying out tomorrow. Hence I will be AWOL for the remainder of this week,
but back at work again on Monday the 15th. CC'ing Tejun, since any
potential "surprises" will most likely be from the blk-cgroup work.
But it's been brewing for a while and sitting in my tree and
linux-next for a long time, so should be solid."

* 'for-3.11/core' of git://git.kernel.dk/linux-block: (36 commits)
elevator: Fix a race in elevator switching
block: Reserve only one queue tag for sync IO if only 3 tags are available
writeback: Fix periodic writeback after fs mount
blk-throttle: implement proper hierarchy support
blk-throttle: implement throtl_grp->has_rules[]
blk-throttle: Account for child group's start time in parent while bio climbs up
blk-throttle: add throtl_qnode for dispatch fairness
blk-throttle: make throtl_pending_timer_fn() ready for hierarchy
blk-throttle: make tg_dispatch_one_bio() ready for hierarchy
blk-throttle: make blk_throtl_bio() ready for hierarchy
blk-throttle: make blk_throtl_drain() ready for hierarchy
blk-throttle: dispatch from throtl_pending_timer_fn()
blk-throttle: implement dispatch looping
blk-throttle: separate out throtl_service_queue->pending_timer from throtl_data->dispatch_work
blk-throttle: set REQ_THROTTLED from throtl_charge_bio() and gate stats update with it
blk-throttle: implement sq_to_tg(), sq_to_td() and throtl_log()
blk-throttle: add throtl_service_queue->parent_sq
blk-throttle: generalize update_disptime optimization in blk_throtl_bio()
blk-throttle: dispatch to throtl_data->service_queue.bio_lists[]
blk-throttle: move bio_lists[] and friends to throtl_service_queue
...



# 9138125b 14-May-2013 Tejun Heo <tj@kernel.org>

blk-throttle: implement proper hierarchy support

With the recent updates, blk-throttle is finally ready for proper
hierarchy support. Dispatching now honors service_queue->parent_sq
and propagates correctly. The only thing missing is setting
->parent_sq correctly so that throtl_grp hierarchy matches the cgroup
hierarchy.

This patch updates throtl_pd_init() such that service_queues form the
same hierarchy as the cgroup hierarchy if sane_behavior is enabled.
As this concludes proper hierarchy support for blkcg, the shameful
.broken_hierarchy tag is removed from blkio_subsys.

v2: Updated blkio-controller.txt as suggested by Vivek.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Cc: Li Zefan <lizefan@huawei.com>
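
For illustration, the heart of the change is how throtl_pd_init() picks
each group's parent service_queue. A rough sketch (field and helper names
as in blk-throttle.c of that era; the surrounding init code is omitted):

    struct throtl_service_queue *sq = &tg->service_queue;

    /* default: keep the flat behavior, parent everyone to the top level */
    sq->parent_sq = &blkg->q->td->service_queue;

    /* with cgroup sane_behavior, mirror the cgroup hierarchy instead */
    if (cgroup_sane_behavior(blkg->blkcg->css.cgroup) && blkg->parent)
        sq->parent_sq = &blkg_to_tg(blkg->parent)->service_queue;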



# 693e751e 14-May-2013 Tejun Heo <tj@kernel.org>

blk-throttle: implement throtl_grp->has_rules[]

blk_throtl_bio() has a quick exit path for throtl_grps without limits
configured. It looks at the bps and iops limits and if both are not
configured, the bio is issued immediately. While this is correct in
the current flat hierarchy as each throtl_grp behaves completely
independently, it would become wrong in proper hierarchy mode. A
group without any limits could still be limited by one of its
ancestors and bio's queued for such group should not bypass
blk-throtl.

As having a quick bypass mechanism is beneficial, this patch
reimplements the mechanism such that it's correct even with proper
hierarchy. throtl_grp->has_rules[] is added. These booleans are
updated for the whole subtree whenever a config is updated so that
has_rules[] of the whole subtree stays synchronized. They're also
updated when a new throtl_grp comes online so that it can't escape the
limits of its ancestors.

As no throtl_grp has another throtl_grp as parent now, this patch
doesn't yet make any behavior differences.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
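
For illustration, a minimal sketch of the kind of helper this implies,
with -1 standing for "no limit configured" as blk-throttle used at the
time (simplified, not the verbatim patch):

    static void tg_update_has_rules(struct throtl_grp *tg)
    {
        struct throtl_grp *parent_tg = sq_to_tg(tg->service_queue.parent_sq);
        int rw;

        /* a group has rules if it or any ancestor has a bps/iops limit */
        for (rw = READ; rw <= WRITE; rw++)
            tg->has_rules[rw] = (parent_tg && parent_tg->has_rules[rw]) ||
                                tg->bps[rw] != -1 || tg->iops[rw] != -1;
    }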



# 32ee5bc4 14-May-2013 Vivek Goyal <vgoyal@redhat.com>

blk-throttle: Account for child group's start time in parent while bio climbs up

With the planned proper hierarchy support, a bio will climb up the
tree before actually being dispatched. This makes sure bio is also
subjected to parent's throttling limits, if any.

It might happen that the parent is idle and when the bio is transferred
to the parent, a new slice starts fresh. But that is incorrect: the
parent's wait time should have started when the bio was queued in the
child group, and starting it later causes IOs to be throttled more than
configured as they climb the hierarchy.

Given the fact that we have not written hierarchical algorithm in a
way where child's and parents time slices are synchronized, we
transfer the child's start time to parent if parent was idling. If
parent was busy doing dispatch of other bios all this while, this is
not an issue.

Child's slice start time is passed to the parent. The parent looks at
its last expired slice start time. If the child's start time is after
the parent's old start time, that means the parent had been idle and,
after the parent went idle, the child had an IO queued. So use the
child's start time as the parent's start time.

If parent's start time is after child's start time, that means,
when IO got queued in child group, parent was not idle. But later
it dispatched some IO, its slice got trimmed and then it went idle.
After a while child's request got shifted in parent group. In this
case use parent's old start time as new start time as that's the
duration of slice we did not use.

This logic is far from perfect: if there are multiple children, the
first child transferring the bio decides the start time, while a bio
might have been queued even earlier in another child, which is
yet to be transferred up to parent. In that case we will lose
time and bandwidth in parent. This patch is just an approximation
to make situation somewhat better.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
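
For illustration, the transfer rule above boils down to only ever moving
the parent's slice_start forward. A rough sketch (throtl_slice is the
file-local slice length; logging and limit checks omitted):

    static void throtl_start_new_slice_with_credit(struct throtl_grp *tg,
                                                   bool rw, unsigned long start)
    {
        tg->bytes_disp[rw] = 0;
        tg->io_disp[rw] = 0;

        /* child queued after the parent went idle: credit the child's wait */
        if (time_after_eq(start, tg->slice_start[rw]))
            tg->slice_start[rw] = start;
        /* otherwise keep the parent's old, unused slice start */

        tg->slice_end[rw] = jiffies + throtl_slice;
    }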



# c5cc2070 14-May-2013 Tejun Heo <tj@kernel.org>

blk-throttle: add throtl_qnode for dispatch fairness

With flat hierarchy, there's only single level of dispatching
happening and fairness beyond that point is the responsibility of the
rest of the block layer and driver, which usually works out okay;
however, with the planned hierarchy support,
service_queue->bio_lists[] can be filled up by bios from a single
source. While the limits would still be honored, it'd be very easy to
starve IOs from siblings or children.

To avoid such starvation, this patch implements throtl_qnode and
converts service_queue->bio_lists[] to lists of per-source qnodes
which in turn contains the bio's. For example, when a bio is
dispatched from a child group, the bio doesn't get queued on
->bio_lists[] directly but it first gets queued on the group's qnode
which in turn gets queued on service_queue->queued[]. When
dispatching for the upper level, the ->queued[] list is consumed in
round-robin order so that the dispatch window is consumed fairly by
all IO sources.

There are two ways a bio can come to a throtl_grp - directly queued to
the group or dispatched from a child. For the former
throtl_grp->qnode_on_self[rw] is used. For the latter, the child's
->qnode_on_parent[rw].

Note that this means that the child which is contributing a bio to its
parent should stay pinned until all its bios are dispatched to its
grand-parent. This patch moves blkg refcnting from bio add/remove
spots to qnode activation/deactivation so that the blkg containing an
active qnode is always pinned. As child pins the parent, this is
sufficient for keeping the relevant sub-tree pinned while bios are in
flight.

The starvation issue was spotted by Vivek Goyal.

v2: The original patch used the same throtl_grp->qnode_on_self/parent
for reads and writes causing RWs to be queued incorrectly if there
already are outstanding IOs in the other direction. They should
be throtl_grp->qnode_on_self/parent[2] so that READs and WRITEs
can use different qnodes. Spotted by Vivek Goyal.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
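
For illustration, a simplified sketch of the qnode idea and the
round-robin consumption it enables (the blkg reference counting that the
patch moves to qnode activation/deactivation is omitted here):

    struct throtl_qnode {
        struct list_head    node;   /* entry on service_queue->queued[rw] */
        struct bio_list     bios;   /* bios queued through this qnode */
        struct throtl_grp   *tg;    /* the source group */
    };

    static struct bio *throtl_pop_queued(struct list_head *queued)
    {
        struct throtl_qnode *qn;
        struct bio *bio;

        if (list_empty(queued))
            return NULL;

        qn = list_first_entry(queued, struct throtl_qnode, node);
        bio = bio_list_pop(&qn->bios);

        if (bio_list_empty(&qn->bios))
            list_del_init(&qn->node);          /* source drained */
        else
            list_move_tail(&qn->node, queued); /* rotate: next source's turn */

        return bio;
    }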



# 2e48a530 14-May-2013 Tejun Heo <tj@kernel.org>

blk-throttle: make throtl_pending_timer_fn() ready for hierarchy

throtl_pending_timer_fn() currently assumes that the parent_sq is the
top level one and the bio's dispatched are ready to be issued;
however, this assumption will be wrong with proper hierarchy support.
This patch makes the following changes to make
throtl_pending_timer_fn() ready for hierarchy.

* If the parent_sq isn't the top-level one, update the parent
throtl_grp's dispatch time and schedule the next dispatch as
necessary. If the parent's dispatch time is now, repeat the
function for the parent throtl_grp.

* If the parent_sq is the top-level one, kick issue work_item as
before.

* The debug message printed by throtl_log() now prints out the
service_queue's nr_queued[] instead of the total nr_queued as the
latter becomes uninteresting and misleading with hierarchical
dispatch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Vivek Goyal <vgoyal@redhat.com>



# 6bc9c2b4 14-May-2013 Tejun Heo <tj@kernel.org>

blk-throttle: make tg_dispatch_one_bio() ready for hierarchy

tg_dispatch_one_bio() currently assumes that the parent_sq is the top
level one and the bio being dispatched is ready to be issued; however,
this assumption will be wrong with proper hierarchy support. This
patch makes the following changes to make tg_dispatch_one_bio() ready
for hierarchy.

* throtl_data->nr_queued[] is incremented in blk_throtl_bio() instead
of throtl_add_bio_tg() so that throtl_add_bio_tg() can be used to
transfer a bio from a child tg to its parent.

* tg_dispatch_one_bio() is updated to distinguish whether its parent
is another throtl_grp or the throtl_data. If former, the bio is
transferred to the parent throtl_grp using throtl_add_bio_tg(). If
latter, the bio is ready to be issued and put on the top-level
service_queue's bio_lists[] and throtl_data->nr_queued is
decremented.

As all throtl_grps currently have the top level service_queue as their
->parent_sq, this patch in itself doesn't make any behavior
difference.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
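
For illustration, the branch this introduces inside tg_dispatch_one_bio()
looks roughly like the following, where parent_sq is
tg->service_queue.parent_sq and parent_tg is sq_to_tg(parent_sq):

    if (parent_tg) {
        /* parent is another throtl_grp: hand the bio up one level */
        throtl_add_bio_tg(bio, parent_tg);
    } else {
        /* parent is the throtl_data: the bio is ready to be issued */
        bio_list_add(&parent_sq->bio_lists[rw], bio);
        blkg_put(tg_to_blkg(tg));
        tg->td->nr_queued[rw]--;
    }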



# 9e660acf 14-May-2013 Tejun Heo <tj@kernel.org>

blk-throttle: make blk_throtl_bio() ready for hierarchy

Currently, blk_throtl_bio() issues the passed in bio directly if it's
within limits of its associated tg (throtl_grp). This behavior
becomes incorrect with hierarchy support as the bio should be
accounted to and throttled by the ancestor throtl_grps too.

This patch makes the direct issue path of blk_throtl_bio() to loop
until it reaches the top-level service_queue or gets throttled. If
the former, the bio can be issued directly; otherwise, it gets queued
at the first layer it was above limits.

As tg->parent_sq is always the top-level service queue currently, this
patch in itself doesn't make any behavior differences.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
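
For illustration, the direct-issue path described above becomes a loop of
roughly this shape (an excerpt-style sketch; the real function's labels,
FIFO check and slice trimming are omitted):

    sq = &tg->service_queue;
    while (true) {
        if (!tg_may_dispatch(tg, bio, NULL))
            break;                  /* over a limit here: queue at this level */

        /* within limits: charge this level and climb one step */
        throtl_charge_bio(tg, bio);
        sq = sq->parent_sq;
        tg = sq_to_tg(sq);
        if (!tg)
            goto out_unlock;        /* reached the top level, issue directly */
    }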



# 2a12f0dc 14-May-2013 Tejun Heo <tj@kernel.org>

blk-throttle: make blk_throtl_drain() ready for hierarchy

The current blk_throtl_drain() assumes that all active throtl_grps are
queued on throtl_data->service_queue, which won't be true once
hierarchy support is implemented.

This patch makes blk_throtl_drain() perform post-order walk of the
blkg hierarchy draining each associated throtl_grp, which guarantees
that all bios will eventually be pushed to the top-level service_queue
in throtl_data.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Vivek Goyal <vgoyal@redhat.com>



# 6e1a5704 14-May-2013 Tejun Heo <tj@kernel.org>

blk-throttle: dispatch from throtl_pending_timer_fn()

Currently, blk_throtl_dispatch_work_fn() is responsible for both
dispatching bio's from throtl_grp's according to their limits and then
issuing the dispatched bios.

This patch moves the dispatch part to throtl_pending_timer_fn() so
that the work item is kicked iff there are bio's to issue. This is to
avoid work item execution at each step when hierarchy support is
enabled. bio's will be dispatched towards the top-level service_queue
from the timers at each layer and the work item will only be used to
issue the bio's which reached the top-level service_queue.

While fetching bio's to issue from bio_lists[],
blk_throtl_dispatch_work_fn() fetches all READs before WRITEs. While
the original code also dispatched READs first, if multiple throtl_grps
are dispatched on the same run, WRITEs from throtl_grp which is
dispatched first would precede READs from throtl_grps which are
dispatched later. While this is a behavior change, given that the
previous code already prioritized READs and block layer generally
prioritizes and segregates READs from WRITEs, this isn't likely to
make any noticeable differences.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
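
For illustration, the issuing side of blk_throtl_dispatch_work_fn() then
reduces to roughly the following, where q is the request_queue and td_sq
stands for &td->service_queue (a local name used here for brevity):

    bio_list_init(&bio_list_on_stack);

    spin_lock_irq(q->queue_lock);
    for (rw = READ; rw <= WRITE; rw++)
        while ((bio = bio_list_pop(&td_sq->bio_lists[rw])))
            bio_list_add(&bio_list_on_stack, bio);
    spin_unlock_irq(q->queue_lock);

    while ((bio = bio_list_pop(&bio_list_on_stack)))
        generic_make_request(bio);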



# 7f52f98c 14-May-2013 Tejun Heo <tj@kernel.org>

blk-throttle: implement dispatch looping

throtl_select_dispatch() only dispatches throtl_quantum bios on each
invocation. blk_throtl_dispatch_work_fn() in turn depends on
throtl_schedule_next_dispatch() scheduling the next dispatch window
immediately so that undue delays aren't incurred. This effectively
chains multiple dispatch work item executions back-to-back when there
are more than throtl_quantum bios to dispatch on a given tick.

There is no reason to finish the current work item just to repeat it
immediately. This patch makes throtl_schedule_next_dispatch() return
%false without doing anything if the current dispatch window is still
open, and updates blk_throtl_dispatch_work_fn() to repeat dispatching
after cpu_relax() on a %false return.

This change will help implementing hierarchy support as dispatching
will be done from pending_timer and immediate reschedule of timer
function isn't supported and doesn't make much sense.

While this patch changes how dispatch behaves when there are more than
throtl_quantum bios to dispatch on a single tick, the behavior change
is immaterial.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
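
For illustration, the resulting loop in blk_throtl_dispatch_work_fn() is
roughly the following, with sq being the top-level service_queue (the
exact argument list of throtl_schedule_next_dispatch() at this point in
the series is approximated):

    while (true) {
        throtl_select_dispatch(sq);

        if (throtl_schedule_next_dispatch(sq, false))
            break;      /* window closed; next dispatch is scheduled */

        /* window still open: breathe, then dispatch again */
        spin_unlock_irq(q->queue_lock);
        cpu_relax();
        spin_lock_irq(q->queue_lock);
    }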



# 69df0ab0 14-May-2013 Tejun Heo <tj@kernel.org>

blk-throttle: separate out throtl_service_queue->pending_timer from throtl_data->dispatch_work

Currently, throtl_data->dispatch_work is a delayed_work item which
handles both delayed dispatch and issuing bios. The two tasks will be
separated to support proper hierarchy. To prepare for that, this
patch separates out the timer into throtl_service_queue->pending_timer
from throtl_data->dispatch_work and make the latter a work_struct.

* As the timer is now per-service_queue, it's initialized and
del_sync'd as its corresponding service_queue is created and
destroyed. The timer, when triggered, simply schedules
throtl_data->dispatch_work for execution.

* throtl_schedule_delayed_work() is renamed to
throtl_schedule_pending_timer() and takes @sq and @expires now.

* Similarly, throtl_schedule_next_dispatch() now takes @sq, which
should be the parent_sq of the service_queue which just got a new
bio or updated. As the parent_sq is always the top-level
service_queue now, this doesn't change anything at this point.

This patch doesn't introduce any behavior differences.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
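
For illustration, after this split the timer side is tiny: the
per-service_queue timer only arms itself and, when it fires, kicks the
work item. A rough sketch using the timer callback convention of that
era:

    static void throtl_schedule_pending_timer(struct throtl_service_queue *sq,
                                              unsigned long expires)
    {
        mod_timer(&sq->pending_timer, expires);
    }

    static void throtl_pending_timer_fn(unsigned long arg)
    {
        struct throtl_service_queue *sq = (void *)arg;
        struct throtl_data *td = sq_to_td(sq);

        queue_work(kthrotld_workqueue, &td->dispatch_work);
    }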



# 2a0f61e6 14-May-2013 Tejun Heo <tj@kernel.org>

blk-throttle: set REQ_THROTTLED from throtl_charge_bio() and gate stats update with it

With proper hierarchy support, a bio can be dispatched multiple times
until it reaches the top-level service_queue and we don't want to
update dispatch stats at each step. They are local stats and will be
kept local. If recursive stats are necessary, they should be
implemented separately and definitely not by updating counters
recursively on each dispatch.

This patch moves REQ_THROTTLED setting to throtl_charge_bio() and gate
stats update with it so that dispatch stats are updated only on the
first time the bio is charged to a throtl_grp, which will always be
the throtl_grp the bio was originally queued to.

This means that REQ_THROTTLED would be set even for bios which don't
get throttled. As we don't want bios to leave blk-throtl with the
flag set, move REQ_THROTTLED clearing to the end of blk_throtl_bio()
and clear if the bio is being issued directly.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
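
For illustration, the charging side then looks roughly like this,
assuming the era's stats helper, throtl_update_dispatch_stats() (treat
the details as approximate):

    /* in throtl_charge_bio() */
    if (!(bio->bi_rw & REQ_THROTTLED)) {
        bio->bi_rw |= REQ_THROTTLED;
        throtl_update_dispatch_stats(tg_to_blkg(tg),
                                     bio->bi_size, bio->bi_rw);
    }

    /* at the end of blk_throtl_bio(), on the direct-issue path */
    bio->bi_rw &= ~REQ_THROTTLED;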



# fda6f272 14-May-2013 Tejun Heo <tj@kernel.org>

blk-throttle: implement sq_to_tg(), sq_to_td() and throtl_log()

Now that both throtl_data and throtl_grp embed throtl_service_queue,
we can unify throtl_log() and throtl_log_tg().

* sq_to_tg() is added. This returns the throtl_grp a service_queue is
embedded in. If the service_queue is the top-level one embedded in
throtl_data, NULL is returned.

* sq_to_td() is added. A service_queue is always associated with a
throtl_data. This function finds the associated td and returns it.

* throtl_log() is updated to take throtl_service_queue instead of
throtl_data. If the service_queue is one embedded in throtl_grp, it
prints the same header as throtl_log_tg() did. If it's one embedded
in throtl_data, it behaves the same as before. This renders
throtl_log_tg() unnecessary. Removed.

This change is necessary for hierarchy support as we're gonna be using
the same code paths to dispatch bios to intermediate service_queues
embedded in throtl_grps and the top-level service_queue embedded in
throtl_data.

This patch doesn't make any behavior changes.

v2: throtl_log() didn't print a space after blkg path. Updated so
that it prints a space after throtl_grp path. Spotted by Vivek.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
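
For illustration, the two accessors are essentially container_of()
lookups keyed on whether the service_queue has a parent:

    static struct throtl_grp *sq_to_tg(struct throtl_service_queue *sq)
    {
        /* only the top-level sq (embedded in throtl_data) has no parent */
        if (sq && sq->parent_sq)
            return container_of(sq, struct throtl_grp, service_queue);
        return NULL;
    }

    static struct throtl_data *sq_to_td(struct throtl_service_queue *sq)
    {
        struct throtl_grp *tg = sq_to_tg(sq);

        return tg ? tg->td :
                    container_of(sq, struct throtl_data, service_queue);
    }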



# 77216b04 14-May-2013 Tejun Heo <tj@kernel.org>

blk-throttle: add throtl_service_queue->parent_sq

To prepare for hierarchy support, this patch adds
throtl_service_queue->parent_sq which points to the parent
service_queue. Currently, for all service_queues embedded in
throtl_grps, it points to throtl_data->service_queue. As
throtl_data->service_queue doesn't have a parent, its parent_sq is set
to NULL.

There are a number of functions which take both throtl_grp *tg and
throtl_service_queue *parent_sq. With this patch, the parent
service_queue can be determined from @tg and the @parent_sq arguments
are removed.

This patch doesn't make any behavior differences.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Vivek Goyal <vgoyal@redhat.com>



# 0e9f4164 14-May-2013 Tejun Heo <tj@kernel.org>

blk-throttle: generalize update_disptime optimization in blk_throtl_bio()

When blk_throtl_bio() wants to queue a bio to a tg (throtl_grp), it
avoids invoking tg_update_disptime() and
throtl_schedule_next_dispatch() if the tg already has bios queued in
that direction. As a new bio is appended after the existing ones, it
can't change the tg's next dispatch time or the parent's dispatch
schedule.

This optimization is currently open coded in blk_throtl_bio().
Whether the target biolist was occupied was recorded in a local
variable and later used to skip the disptime update. This patch
generalizes it so that throtl_add_bio_tg() sets a new flag
THROTL_TG_WAS_EMPTY if the biolist was empty before the new bio was
added. tg_update_disptime() clears the flag automatically.
blk_throtl_bio() is updated to simply test the flag before updating
disptime.

This patch doesn't make any functional differences now but will enable
using the same optimization for recursive dispatch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
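
For illustration, the queueing path in blk_throtl_bio() then reads
roughly as follows (call signatures as they stood mid-series are
approximated):

    throtl_add_bio_tg(bio, tg);     /* sets THROTL_TG_WAS_EMPTY if needed */

    /* only a bio landing on an empty biolist can change the schedule */
    if (tg->flags & THROTL_TG_WAS_EMPTY) {
        tg_update_disptime(tg);     /* clears THROTL_TG_WAS_EMPTY */
        throtl_schedule_next_dispatch(tg->td);
    }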



# 651930bc 14-May-2013 Tejun Heo <tj@kernel.org>

blk-throttle: dispatch to throtl_data->service_queue.bio_lists[]

throtl_service_queues will eventually form a tree which is anchored at
throtl_data->service_queue and queue bios will climb the tree to the
top service_queue to be executed.

This patch makes the dispatch paths in blk_throtl_dispatch_work_fn()
and blk_throtl_drain() to dispatch bios to
throtl_data->service_queue.bio_lists[] instead of the on-stack
bio_lists. This will keep the final dispatch to the top level
service_queue share the same mechanism as dispatches through the rest
of the hierarchy.

As bio's should be issued in a sleepable context,
blk_throtl_dispatch_work_fn() transfers all dispatched bio's from the
service_queue bio_lists[] into an onstack one before dropping
queue_lock and issuing the bio's.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Vivek Goyal <vgoyal@redhat.com>



# 73f0d49a 14-May-2013 Tejun Heo <tj@kernel.org>

blk-throttle: move bio_lists[] and friends to throtl_service_queue

throtl_service_queues will eventually form a tree which is anchored at
throtl_data->service_queue and queue bios will climb the tree to the
top service_queue to be executed.

This patch moves bio_lists[] and nr_queued[] from throtl_grp to its
service_queue to prepare for that. As currently only the
throtl_data->service_queue is in use, this patch just ends up moving
throtl_grp->bio_lists[] and ->nr_queued[] to
throtl_grp->service_queue.bio_lists[] and ->nr_queued[] without making
any functional differences.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Vivek Goyal <vgoyal@redhat.com>



# 49a2f1e3 14-May-2013 Tejun Heo <tj@kernel.org>

blk-throttle: add throtl_grp->service_queue

Currently, there's single service_queue per queue -
throtl_data->service_queue. All active throtl_grp's are queued on the
queue and dispatched according to their limits. To support hierarchy,
this will be expanded such that active throtl_grp's form a tree
anchored at throtl_data->service_queue and chained through each
intermediate throtl_grp's service_queue.

This patch adds throtl_grp->service_queue to prepare for hierarchy
support. The initialization function - throtl_service_queue_init() -
is added and replaces the macro initializer. The newly added
tg->service_queue isn't used yet. Following patches will do.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Vivek Goyal <vgoyal@redhat.com>



# 0049af73 14-May-2013 Tejun Heo <tj@kernel.org>

blk-throttle: reorganize throtl_service_queue passed around as argument

throtl_service_queue will be the building block of hierarchy support
and will form a tree. This patch updates its usages as arguments to
reduce confusion.

* When a service queue is used as the parent role - the host of the
rbtree - use @parent_sq instead of @sq.

* For functions taking both @tg and @parent_sq, reorder them so that
the order is (@tg, @parent_sq) not the other way around. This makes
the code follow the usual convention of specifying the primary
target of the operation as the first argument.

This patch doesn't make any functional differences.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Vivek Goyal <vgoyal@redhat.com>



# e2d57e60 14-May-2013 Tejun Heo <tj@kernel.org>

blk-throttle: pass around throtl_service_queue instead of throtl_data

throtl_service_queue will be used as the basic block to implement
hierarchy support. Pass around throtl_service_queue *sq instead of
throtl_data *td in the following functions which will be used across
multiple levels of hierarchy.

* [__]throtl_enqueue/dequeue_tg()

* throtl_add_bio_tg()

* tg_update_disptime()

* throtl_select_dispatch()

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Vivek Goyal <vgoyal@redhat.com>



# 0f3457f6 14-May-2013 Tejun Heo <tj@kernel.org>

blk-throttle: add backlink pointer from throtl_grp to throtl_data

Add throtl_grp->td so that the td (throtl_data) a given tg
(throtl_grp) belongs to can be determined, and remove @td argument
from functions which take both @td and @tg as the former now can be
determined from the latter.

This generally simplifies the code and removes a number of cases where
@td is passed as an argument without being actually used. This will
also help hierarchy support implementation.

While at it, in multi-line conditions, move the logical operators
leading broken lines to the end of the previous line.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Vivek Goyal <vgoyal@redhat.com>



# 5b2c16aa 14-May-2013 Tejun Heo <tj@kernel.org>

blk-throttle: simplify throtl_grp flag handling

blk-throttle is still using function-defining macros to define flag
handling functions, which went out style at least a decade ago.

Just define the flag as bitmask and use direct bit operations.

This patch doesn't make any functional changes.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
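
For illustration, the flag becomes a plain bitmask manipulated directly
(a rough sketch; the flag name follows later blk-throttle.c):

    enum tg_state_flags {
        THROTL_TG_PENDING = 1 << 0,     /* on parent's pending tree */
    };

    /* set / test / clear with ordinary bit operations */
    tg->flags |= THROTL_TG_PENDING;
    if (tg->flags & THROTL_TG_PENDING)
        tg->flags &= ~THROTL_TG_PENDING;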



# c9e0332e 14-May-2013 Tejun Heo <tj@kernel.org>

blk-throttle: rename throtl_rb_root to throtl_service_queue

throtl_rb_root will be expanded to cover more roles for hierarchy
support. Rename it to throtl_service_queue and make its fields more
descriptive.

* rb -> pending_tree
* left -> first_pending
* count -> nr_pending
* min_disptime -> first_pending_disptime

This patch is purely cosmetic.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
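
For reference, after the renames the structure looks roughly like:

    struct throtl_service_queue {
        struct rb_root    pending_tree;             /* RB tree of active tgs */
        struct rb_node    *first_pending;           /* first node in the tree */
        unsigned int      nr_pending;               /* # of tgs on the tree */
        unsigned long     first_pending_disptime;   /* disptime of first tg */
    };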


