This patch short-circuits all the direct-io page dirtying logic for
higher-order pages.  Without this, we pointlessly bounce BIOs up to keventd
all the time.

 25-akpm/fs/bio.c |   12 +++++++++---
 1 files changed, 9 insertions(+), 3 deletions(-)

diff -puN fs/bio.c~direct-io-skip-compound-pages fs/bio.c
--- 25/fs/bio.c~direct-io-skip-compound-pages	Thu Sep 18 14:22:09 2003
+++ 25-akpm/fs/bio.c	Thu Sep 18 15:18:28 2003
@@ -532,6 +532,12 @@ void bio_unmap_user(struct bio *bio, int
  * check that the pages are still dirty.  If so, fine.  If not, redirty them
  * in process context.
  *
+ * We special-case compound pages here: normally this means reads into hugetlb
+ * pages.  The logic in here doesn't really work right for compound pages
+ * because the VM does not uniformly chase down the head page in all cases.
+ * But dirtiness of compound pages is pretty meaningless anyway: the VM doesn't
+ * handle them at all.  So we skip compound pages here at an early stage.
+ *
  * Note that this code is very hard to test under normal circumstances because
  * direct-io pins the pages with get_user_pages().  This makes
  * is_page_cache_freeable return false, and the VM will not clean the pages.
@@ -553,8 +559,8 @@ void bio_set_pages_dirty(struct bio *bio
 	for (i = 0; i < bio->bi_vcnt; i++) {
 		struct page *page = bvec[i].bv_page;
 
-		if (page)
-			set_page_dirty_lock(bvec[i].bv_page);
+		if (page && !PageCompound(page))
+			set_page_dirty_lock(page);
 	}
 }
 
@@ -620,7 +626,7 @@ void bio_check_pages_dirty(struct bio *b
 	for (i = 0; i < bio->bi_vcnt; i++) {
 		struct page *page = bvec[i].bv_page;
 
-		if (PageDirty(page)) {
+		if (PageDirty(page) || PageCompound(page)) {
 			page_cache_release(page);
 			bvec[i].bv_page = NULL;
 		} else {
_