Commit e73e81b

Ethan Lien authored and kdave committed
btrfs: balance dirty metadata pages in btrfs_finish_ordered_io
[Problem description and how we fix it]
We should balance dirty metadata pages at the end of
btrfs_finish_ordered_io, since a small, unmergeable random write can
potentially produce dirty metadata which is multiple times larger than
the data itself. For example, a small, unmergeable 4KiB write may
produce:

    16KiB dirty leaf (and possibly 16KiB dirty node) in subvolume tree
    16KiB dirty leaf (and possibly 16KiB dirty node) in checksum tree
    16KiB dirty leaf (and possibly 16KiB dirty node) in extent tree

Although we do balance dirty pages on the write side, in the buffered
write path most metadata is dirtied only after we reach the dirty
background limit (which, so far, only counts dirty data pages) and wake
up the flusher thread. If there are many small, unmergeable random
writes spread across a large btree, we will see a burst of dirty pages
exceed the dirty_bytes limit after the flusher thread is woken up -
which is not what we expect. On our machine this caused an
out-of-memory problem, since a page cannot be dropped while it is
marked dirty.

One may worry that we could sleep in btrfs_btree_balance_dirty_nodelay,
but since btrfs_finish_ordered_io runs in a separate worker, it will
not stop the flusher from consuming dirty pages. Also, because a
different worker is used for metadata writeback endio, sleeping in
btrfs_finish_ordered_io helps us throttle the amount of dirty metadata
pages.

[Reproduce steps]
To reproduce the problem, we need to do 4KiB writes randomly spread
across a large btree. On our 2GiB RAM machine:

1) Create 4 subvolumes.
2) Run fio on each subvolume:

   [global]
   direct=0
   rw=randwrite
   ioengine=libaio
   bs=4k
   iodepth=16
   numjobs=1
   group_reporting
   size=128G
   runtime=1800
   norandommap
   time_based
   randrepeat=0

3) Take a snapshot of each subvolume and repeat fio on the existing
   files.
4) Repeat step (3) until we get large btrees. In our case, by observing
   btrfs_root_item->bytes_used, we had 2GiB of metadata in each
   subvolume tree and 12GiB of metadata in the extent tree.
5) Stop all fio, take a snapshot again, and wait until all delayed work
   is completed.
6) Start all fio. A few seconds later we hit OOM when the flusher
   starts to work.

It can be reproduced even when using nocow writes.

Signed-off-by: Ethan Lien <[email protected]>
Reviewed-by: David Sterba <[email protected]>
[ add comment ]
Signed-off-by: David Sterba <[email protected]>
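For reference, the call added by this patch throttles the ordered-io
worker against dirty btree pages without first flushing delayed items.
The sketch below is a simplified paraphrase of the pre-existing helper
btrfs_btree_balance_dirty_nodelay() as it looked in fs/btrfs/disk-io.c
around this kernel version; it is not part of the diff, and details
such as __btrfs_btree_balance_dirty, BTRFS_DIRTY_METADATA_THRESH,
dirty_metadata_bytes and dirty_metadata_batch are recalled from the
existing code rather than introduced here, so the exact form may
differ.

/*
 * Simplified sketch (kernel-internal code, not standalone): the
 * nodelay variant skips flushing delayed items and only rate-limits
 * the caller when dirty btree pages exceed a threshold.
 */
static void __btrfs_btree_balance_dirty(struct btrfs_fs_info *fs_info,
					int flush_delayed)
{
	int ret;

	/* Avoid recursing into writeback from memory-allocation context. */
	if (current->flags & PF_MEMALLOC)
		return;

	if (flush_delayed)
		btrfs_balance_delayed_items(fs_info);

	ret = __percpu_counter_compare(&fs_info->dirty_metadata_bytes,
				       BTRFS_DIRTY_METADATA_THRESH,
				       fs_info->dirty_metadata_batch);
	if (ret > 0)
		balance_dirty_pages_ratelimited(
				fs_info->btree_inode->i_mapping);
}

void btrfs_btree_balance_dirty_nodelay(struct btrfs_fs_info *fs_info)
{
	/*
	 * May sleep in balance_dirty_pages(), which is acceptable in the
	 * ordered-io worker: it does not block the flusher thread.
	 */
	__btrfs_btree_balance_dirty(fs_info, 0);
}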
1 parent 78d4295 commit e73e81b


fs/btrfs/inode.c

Lines changed: 3 additions & 0 deletions
@@ -3163,6 +3163,9 @@ static int btrfs_finish_ordered_io(struct btrfs_ordered_extent *ordered_extent)
 	/* once for the tree */
 	btrfs_put_ordered_extent(ordered_extent);
 
+	/* Try to release some metadata so we don't get an OOM but don't wait */
+	btrfs_btree_balance_dirty_nodelay(fs_info);
+
 	return ret;
 }
 