[PATCH 0/6 v2] fs: Provide function to unmap metadata for a range of blocks

[PATCH 0/6 v2] fs: Provide function to unmap metadata for a range of blocks

Jan Kara
Hello,

I've noticed that in several places we need to unmap metadata in the buffer cache
for a range of blocks, and we do it by iterating over all blocks in the given
range. Let's provide a helper function for that and implement it in a way that
is more efficient for large ranges of blocks. Also clean up other uses of
unmap_underlying_metadata(). The patches passed xfstests for ext2 and ext4.

Jens, can you merge these patches if they look fine to you?

Changes since v1:
* Improved comment describing what the function does
* Renamed function
* Added wrapper for single block users and use it

                                                                Honza

------------------------------------------------------------------------------
Developer Access Program for Intel Xeon Phi Processors
Access to Intel Xeon Phi processor-based developer platforms.
With one year of Intel Parallel Studio XE.
Training and support from Colfax.
Order your platform today. http://sdm.link/xeonphi
_______________________________________________
Linux-NTFS-Dev mailing list
[hidden email]
https://lists.sourceforge.net/lists/listinfo/linux-ntfs-dev

[PATCH 1/6] fs: Provide function to unmap metadata for a range of blocks

Jan Kara
Provide a function equivalent to unmap_underlying_metadata() that works on a
range of blocks. The function is somewhat optimized: it uses pagevec lookups
instead of looking up buffer heads one by one, and it pins buffer heads with the
page lock instead of the mapping's private_lock, which scales better.

Signed-off-by: Jan Kara <[hidden email]>
---
 fs/buffer.c                 | 76 +++++++++++++++++++++++++++++++++++++++++++++
 include/linux/buffer_head.h |  2 ++
 2 files changed, 78 insertions(+)

diff --git a/fs/buffer.c b/fs/buffer.c
index b205a629001d..05f30838cec3 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -43,6 +43,7 @@
 #include <linux/bitops.h>
 #include <linux/mpage.h>
 #include <linux/bit_spinlock.h>
+#include <linux/pagevec.h>
 #include <trace/events/block.h>
 
 static int fsync_buffers_list(spinlock_t *lock, struct list_head *list);
@@ -1636,6 +1637,81 @@ void unmap_underlying_metadata(struct block_device *bdev, sector_t block)
 }
 EXPORT_SYMBOL(unmap_underlying_metadata);
 
+/**
+ * clean_bdev_aliases: clean a range of buffers in block device
+ * @bdev: Block device to clean buffers in
+ * @block: Start of a range of blocks to clean
+ * @len: Number of blocks to clean
+ *
+ * We are taking a range of blocks for data and we don't want writeback of any
+ * buffer-cache aliases starting from return from this function and until the
+ * moment when something will explicitly mark the buffer dirty (hopefully that
+ * will not happen until we will free that block ;-) We don't even need to mark
+ * it not-uptodate - nobody can expect anything from a newly allocated buffer
+ * anyway. We used to used unmap_buffer() for such invalidation, but that was
+ * wrong. We definitely don't want to mark the alias unmapped, for example - it
+ * would confuse anyone who might pick it with bread() afterwards...
+ *
+ * Also..  Note that bforget() doesn't lock the buffer.  So there can be
+ * writeout I/O going on against recently-freed buffers.  We don't wait on that
+ * I/O in bforget() - it's more efficient to wait on the I/O only if we really
+ * need to.  That happens here.
+ */
+void clean_bdev_aliases(struct block_device *bdev, sector_t block, sector_t len)
+{
+	struct inode *bd_inode = bdev->bd_inode;
+	struct address_space *bd_mapping = bd_inode->i_mapping;
+	struct pagevec pvec;
+	pgoff_t index = block >> (PAGE_SHIFT - bd_inode->i_blkbits);
+	pgoff_t end;
+	int i;
+	struct buffer_head *bh;
+	struct buffer_head *head;
+
+	end = (block + len - 1) >> (PAGE_SHIFT - bd_inode->i_blkbits);
+	pagevec_init(&pvec, 0);
+	while (index <= end && pagevec_lookup(&pvec, bd_mapping, index,
+			min(end - index, (pgoff_t)PAGEVEC_SIZE - 1) + 1)) {
+		for (i = 0; i < pagevec_count(&pvec); i++) {
+			struct page *page = pvec.pages[i];
+
+			index = page->index;
+			if (index > end)
+				break;
+			if (!page_has_buffers(page))
+				continue;
+			/*
+			 * We use the page lock instead of bd_mapping->private_lock
+			 * to pin buffers here since we can afford to sleep and
+			 * it scales better than a global spinlock.
+			 */
+			lock_page(page);
+			/* Recheck when the page is locked which pins bhs */
+			if (!page_has_buffers(page))
+				goto unlock_page;
+			head = page_buffers(page);
+			bh = head;
+			do {
+				if (!buffer_mapped(bh))
+					goto next;
+				if (bh->b_blocknr >= block + len)
+					break;
+				clear_buffer_dirty(bh);
+				wait_on_buffer(bh);
+				clear_buffer_req(bh);
+next:
+				bh = bh->b_this_page;
+			} while (bh != head);
+unlock_page:
+			unlock_page(page);
+		}
+		pagevec_release(&pvec);
+		cond_resched();
+		index++;
+	}
+}
+EXPORT_SYMBOL(clean_bdev_aliases);
+
 /*
  * Size is a power-of-two in the range 512..PAGE_SIZE,
  * and the case we care about most is PAGE_SIZE.
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index ebbacd14d450..9c9c73ce7d4f 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -169,6 +169,8 @@ void invalidate_inode_buffers(struct inode *);
 int remove_inode_buffers(struct inode *inode);
 int sync_mapping_buffers(struct address_space *mapping);
 void unmap_underlying_metadata(struct block_device *bdev, sector_t block);
+void clean_bdev_aliases(struct block_device *bdev, sector_t block,
+			sector_t len);
 
 void mark_buffer_async_write(struct buffer_head *bh);
 void __wait_on_buffer(struct buffer_head *);
--
2.6.6



[PATCH 2/6] direct-io: Use clean_bdev_aliases() instead of handmade iteration

Jan Kara
Use the newly provided function instead of iterating through all allocated
blocks.

Signed-off-by: Jan Kara <[hidden email]>
---
 fs/direct-io.c | 28 +++++++---------------------
 1 file changed, 7 insertions(+), 21 deletions(-)

diff --git a/fs/direct-io.c b/fs/direct-io.c
index fb9aa16a7727..12ac532a444a 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -843,24 +843,6 @@ submit_page_section(struct dio *dio, struct dio_submit *sdio, struct page *page,
 }
 
 /*
- * Clean any dirty buffers in the blockdev mapping which alias newly-created
- * file blocks.  Only called for S_ISREG files - blockdevs do not set
- * buffer_new
- */
-static void clean_blockdev_aliases(struct dio *dio, struct buffer_head *map_bh)
-{
-	unsigned i;
-	unsigned nblocks;
-
-	nblocks = map_bh->b_size >> dio->inode->i_blkbits;
-
-	for (i = 0; i < nblocks; i++) {
-		unmap_underlying_metadata(map_bh->b_bdev,
-					  map_bh->b_blocknr + i);
-	}
-}
-
-/*
  * If we are not writing the entire block and get_block() allocated
  * the block for us, we need to fill-in the unused portion of the
  * block with zeros. This happens only if user-buffer, fileoffset or
@@ -960,11 +942,15 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio,
 					goto do_holes;
 
 				sdio->blocks_available =
-						map_bh->b_size >> sdio->blkbits;
+						map_bh->b_size >> blkbits;
 				sdio->next_block_for_io =
 					map_bh->b_blocknr << sdio->blkfactor;
-				if (buffer_new(map_bh))
-					clean_blockdev_aliases(dio, map_bh);
+				if (buffer_new(map_bh)) {
+					clean_bdev_aliases(
+						map_bh->b_bdev,
+						map_bh->b_blocknr,
+						map_bh->b_size >> blkbits);
+				}
 
 				if (!sdio->blkfactor)
 					goto do_holes;
--
2.6.6



[PATCH 3/6] ext4: Use clean_bdev_aliases() instead of iteration

Jan Kara
Use clean_bdev_aliases() instead of iterating through blocks one by one.

Signed-off-by: Jan Kara <[hidden email]>
---
 fs/ext4/extents.c | 13 ++-----------
 fs/ext4/inode.c   | 15 ++++-----------
 2 files changed, 6 insertions(+), 22 deletions(-)

diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index c930a0110fb4..dd5b74dfa018 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -3777,14 +3777,6 @@ static int ext4_convert_unwritten_extents_endio(handle_t *handle,
  return err;
 }
 
-static void unmap_underlying_metadata_blocks(struct block_device *bdev,
-					     sector_t block, int count)
-{
-	int i;
-	for (i = 0; i < count; i++)
-		unmap_underlying_metadata(bdev, block + i);
-}
-
 /*
  * Handle EOFBLOCKS_FL flag, clearing it if necessary
  */
@@ -4121,9 +4113,8 @@ ext4_ext_handle_unwritten_extents(handle_t *handle, struct inode *inode,
  * new.
  */
 	if (allocated > map->m_len) {
-		unmap_underlying_metadata_blocks(inode->i_sb->s_bdev,
-					newblock + map->m_len,
-					allocated - map->m_len);
+		clean_bdev_aliases(inode->i_sb->s_bdev, newblock + map->m_len,
+				   allocated - map->m_len);
 		allocated = map->m_len;
 	}
 	map->m_len = allocated;
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 9c064727ed62..7c7cc4ae4b8e 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -654,12 +654,8 @@ int ext4_map_blocks(handle_t *handle, struct inode *inode,
 		if (flags & EXT4_GET_BLOCKS_ZERO &&
 		    map->m_flags & EXT4_MAP_MAPPED &&
 		    map->m_flags & EXT4_MAP_NEW) {
-			ext4_lblk_t i;
-
-			for (i = 0; i < map->m_len; i++) {
-				unmap_underlying_metadata(inode->i_sb->s_bdev,
-							  map->m_pblk + i);
-			}
+			clean_bdev_aliases(inode->i_sb->s_bdev, map->m_pblk,
+					   map->m_len);
 			ret = ext4_issue_zeroout(inode, map->m_lblk,
 						 map->m_pblk, map->m_len);
 			if (ret) {
@@ -2360,11 +2356,8 @@ static int mpage_map_one_extent(handle_t *handle, struct mpage_da_data *mpd)
 
 	BUG_ON(map->m_len == 0);
 	if (map->m_flags & EXT4_MAP_NEW) {
-		struct block_device *bdev = inode->i_sb->s_bdev;
-		int i;
-
-		for (i = 0; i < map->m_len; i++)
-			unmap_underlying_metadata(bdev, map->m_pblk + i);
+		clean_bdev_aliases(inode->i_sb->s_bdev, map->m_pblk,
+				   map->m_len);
 	}
 	return 0;
 }
--
2.6.6



[PATCH 4/6] ext2: Use clean_bdev_aliases() instead of iteration

Jan Kara
Use clean_bdev_aliases() instead of iterating through blocks one by one.

Signed-off-by: Jan Kara <[hidden email]>
---
 fs/ext2/inode.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/fs/ext2/inode.c b/fs/ext2/inode.c
index d831e24dc885..eb11f7e2b8aa 100644
--- a/fs/ext2/inode.c
+++ b/fs/ext2/inode.c
@@ -732,16 +732,13 @@ static int ext2_get_blocks(struct inode *inode,
 	}
 
 	if (IS_DAX(inode)) {
-		int i;
-
 		/*
 		 * We must unmap blocks before zeroing so that writeback cannot
 		 * overwrite zeros with stale data from block device page cache.
 		 */
-		for (i = 0; i < count; i++) {
-			unmap_underlying_metadata(inode->i_sb->s_bdev,
-				le32_to_cpu(chain[depth-1].key) + i);
-		}
+		clean_bdev_aliases(inode->i_sb->s_bdev,
+				   le32_to_cpu(chain[depth-1].key),
+				   count);
 		/*
 		 * block must be initialised before we put it in the tree
 		 * so that it's not found by another thread before it's
--
2.6.6



[PATCH 5/6] fs: Add helper to clean bdev aliases under a bh and use it

Jan Kara
Add a helper function that clears buffer heads from a block device that alias
the passed bh. Use this helper from filesystems instead of the original
unmap_underlying_metadata() to save some boilerplate code, and to have a better
name for the functionality, since it has not actually unmapped anything for a
*long* time.

Signed-off-by: Jan Kara <[hidden email]>
---
 fs/buffer.c                 | 8 +++-----
 fs/ext4/inode.c             | 3 +--
 fs/ext4/page-io.c           | 2 +-
 fs/mpage.c                  | 3 +--
 fs/ntfs/aops.c              | 2 +-
 fs/ntfs/file.c              | 5 ++---
 fs/ocfs2/aops.c             | 2 +-
 fs/ufs/balloc.c             | 3 +--
 fs/ufs/inode.c              | 3 +--
 include/linux/buffer_head.h | 4 ++++
 10 files changed, 16 insertions(+), 19 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 05f30838cec3..f96c079e181d 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1821,8 +1821,7 @@ int __block_write_full_page(struct inode *inode, struct page *page,
 			if (buffer_new(bh)) {
 				/* blockdev mappings never come here */
 				clear_buffer_new(bh);
-				unmap_underlying_metadata(bh->b_bdev,
-							  bh->b_blocknr);
+				clean_bdev_bh_alias(bh);
 			}
 		}
 		bh = bh->b_this_page;
@@ -2068,8 +2067,7 @@ int __block_write_begin_int(struct page *page, loff_t pos, unsigned len,
 		}
 
 		if (buffer_new(bh)) {
-			unmap_underlying_metadata(bh->b_bdev,
-						  bh->b_blocknr);
+			clean_bdev_bh_alias(bh);
 			if (PageUptodate(page)) {
 				clear_buffer_new(bh);
 				set_buffer_uptodate(bh);
@@ -2709,7 +2707,7 @@ int nobh_write_begin(struct address_space *mapping,
 		if (!buffer_mapped(bh))
 			is_mapped_to_disk = 0;
 		if (buffer_new(bh))
-			unmap_underlying_metadata(bh->b_bdev, bh->b_blocknr);
+			clean_bdev_bh_alias(bh);
 		if (PageUptodate(page)) {
 			set_buffer_uptodate(bh);
 			continue;
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 7c7cc4ae4b8e..2f8127601bef 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1123,8 +1123,7 @@ static int ext4_block_write_begin(struct page *page, loff_t pos, unsigned len,
 		if (err)
 			break;
 		if (buffer_new(bh)) {
-			unmap_underlying_metadata(bh->b_bdev,
-						  bh->b_blocknr);
+			clean_bdev_bh_alias(bh);
 			if (PageUptodate(page)) {
 				clear_buffer_new(bh);
 				set_buffer_uptodate(bh);
diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index 0094923e5ebf..feed6a161e56 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -457,7 +457,7 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
 		}
 		if (buffer_new(bh)) {
 			clear_buffer_new(bh);
-			unmap_underlying_metadata(bh->b_bdev, bh->b_blocknr);
+			clean_bdev_bh_alias(bh);
 		}
 		set_buffer_async_write(bh);
 		nr_to_submit++;
diff --git a/fs/mpage.c b/fs/mpage.c
index d2413af0823a..a15e0292a000 100644
--- a/fs/mpage.c
+++ b/fs/mpage.c
@@ -555,8 +555,7 @@ static int __mpage_writepage(struct page *page, struct writeback_control *wbc,
 		if (mpd->get_block(inode, block_in_file, &map_bh, 1))
 			goto confused;
 		if (buffer_new(&map_bh))
-			unmap_underlying_metadata(map_bh.b_bdev,
-						  map_bh.b_blocknr);
+			clean_bdev_bh_alias(&map_bh);
 		if (buffer_boundary(&map_bh)) {
 			boundary_block = map_bh.b_blocknr;
 			boundary_bdev = map_bh.b_bdev;
diff --git a/fs/ntfs/aops.c b/fs/ntfs/aops.c
index fe251f187ff8..571d0f933080 100644
--- a/fs/ntfs/aops.c
+++ b/fs/ntfs/aops.c
@@ -764,7 +764,7 @@ static int ntfs_write_block(struct page *page, struct writeback_control *wbc)
 			}
 			// TODO: Instantiate the hole.
 			// clear_buffer_new(bh);
-			// unmap_underlying_metadata(bh->b_bdev, bh->b_blocknr);
+			// clean_bdev_bh_alias(bh);
 			ntfs_error(vol->sb, "Writing into sparse regions is "
 					"not supported yet. Sorry.");
 			err = -EOPNOTSUPP;
diff --git a/fs/ntfs/file.c b/fs/ntfs/file.c
index bf72a2c58b75..99510d811a8c 100644
--- a/fs/ntfs/file.c
+++ b/fs/ntfs/file.c
@@ -740,8 +740,7 @@ static int ntfs_prepare_pages_for_non_resident_write(struct page **pages,
 				set_buffer_uptodate(bh);
 				if (unlikely(was_hole)) {
 					/* We allocated the buffer. */
-					unmap_underlying_metadata(bh->b_bdev,
-							bh->b_blocknr);
+					clean_bdev_bh_alias(bh);
 					if (bh_end <= pos || bh_pos >= end)
 						mark_buffer_dirty(bh);
 					else
@@ -784,7 +783,7 @@ static int ntfs_prepare_pages_for_non_resident_write(struct page **pages,
 				continue;
 			}
 			/* We allocated the buffer. */
-			unmap_underlying_metadata(bh->b_bdev, bh->b_blocknr);
+			clean_bdev_bh_alias(bh);
 			/*
 			 * If the buffer is fully outside the write, zero it,
 			 * set it uptodate, and mark it dirty so it gets
diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
index c5c5b9748ea3..e8f65eefffca 100644
--- a/fs/ocfs2/aops.c
+++ b/fs/ocfs2/aops.c
@@ -630,7 +630,7 @@ int ocfs2_map_page_blocks(struct page *page, u64 *p_blkno,
 
 		if (!buffer_mapped(bh)) {
 			map_bh(bh, inode->i_sb, *p_blkno);
-			unmap_underlying_metadata(bh->b_bdev, bh->b_blocknr);
+			clean_bdev_bh_alias(bh);
 		}
 
 		if (PageUptodate(page)) {
diff --git a/fs/ufs/balloc.c b/fs/ufs/balloc.c
index 67e085d591d8..92b4acd4b0aa 100644
--- a/fs/ufs/balloc.c
+++ b/fs/ufs/balloc.c
@@ -306,8 +306,7 @@ static void ufs_change_blocknr(struct inode *inode, sector_t beg,
 			   (unsigned long long)(pos + newb), pos);
 
 		bh->b_blocknr = newb + pos;
-		unmap_underlying_metadata(bh->b_bdev,
-					  bh->b_blocknr);
+		clean_bdev_bh_alias(bh);
 		mark_buffer_dirty(bh);
 		++j;
 		bh = bh->b_this_page;
diff --git a/fs/ufs/inode.c b/fs/ufs/inode.c
index 190d64be22ed..45ceb94e89e4 100644
--- a/fs/ufs/inode.c
+++ b/fs/ufs/inode.c
@@ -1070,8 +1070,7 @@ static int ufs_alloc_lastblock(struct inode *inode, loff_t size)
 
 	       if (buffer_new(bh)) {
		       clear_buffer_new(bh);
-		       unmap_underlying_metadata(bh->b_bdev,
-						 bh->b_blocknr);
+		       clean_bdev_bh_alias(bh);
		       /*
			* we do not zeroize fragment, because of
			* if it maped to hole, it already contains zeroes
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index 9c9c73ce7d4f..d1ab91fc6d43 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -171,6 +171,10 @@ int sync_mapping_buffers(struct address_space *mapping);
 void unmap_underlying_metadata(struct block_device *bdev, sector_t block);
 void clean_bdev_aliases(struct block_device *bdev, sector_t block,
			sector_t len);
+static inline void clean_bdev_bh_alias(struct buffer_head *bh)
+{
+	clean_bdev_aliases(bh->b_bdev, bh->b_blocknr, 1);
+}
 
 void mark_buffer_async_write(struct buffer_head *bh);
 void __wait_on_buffer(struct buffer_head *);
--
2.6.6



[PATCH 6/6] fs: Remove unmap_underlying_metadata

Jan Kara
Nobody is using this function anymore. Remove it.

Signed-off-by: Jan Kara <[hidden email]>
---
 fs/buffer.c                 | 32 --------------------------------
 include/linux/buffer_head.h |  1 -
 2 files changed, 33 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index f96c079e181d..6d6680c8d306 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1605,38 +1605,6 @@ void create_empty_buffers(struct page *page,
 }
 EXPORT_SYMBOL(create_empty_buffers);
 
-/*
- * We are taking a block for data and we don't want any output from any
- * buffer-cache aliases starting from return from that function and
- * until the moment when something will explicitly mark the buffer
- * dirty (hopefully that will not happen until we will free that block ;-)
- * We don't even need to mark it not-uptodate - nobody can expect
- * anything from a newly allocated buffer anyway. We used to used
- * unmap_buffer() for such invalidation, but that was wrong. We definitely
- * don't want to mark the alias unmapped, for example - it would confuse
- * anyone who might pick it with bread() afterwards...
- *
- * Also..  Note that bforget() doesn't lock the buffer.  So there can
- * be writeout I/O going on against recently-freed buffers.  We don't
- * wait on that I/O in bforget() - it's more efficient to wait on the I/O
- * only if we really need to.  That happens here.
- */
-void unmap_underlying_metadata(struct block_device *bdev, sector_t block)
-{
-	struct buffer_head *old_bh;
-
-	might_sleep();
-
-	old_bh = __find_get_block_slow(bdev, block);
-	if (old_bh) {
-		clear_buffer_dirty(old_bh);
-		wait_on_buffer(old_bh);
-		clear_buffer_req(old_bh);
-		__brelse(old_bh);
-	}
-}
-EXPORT_SYMBOL(unmap_underlying_metadata);
-
 /**
  * clean_bdev_aliases: clean a range of buffers in block device
  * @bdev: Block device to clean buffers in
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index d1ab91fc6d43..d67ab83823ad 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -168,7 +168,6 @@ int inode_has_buffers(struct inode *);
 void invalidate_inode_buffers(struct inode *);
 int remove_inode_buffers(struct inode *inode);
 int sync_mapping_buffers(struct address_space *mapping);
-void unmap_underlying_metadata(struct block_device *bdev, sector_t block);
 void clean_bdev_aliases(struct block_device *bdev, sector_t block,
			sector_t len);
 static inline void clean_bdev_bh_alias(struct buffer_head *bh)
--
2.6.6



Re: [PATCH 0/6 v2] fs: Provide function to unmap metadata for a range of blocks

Jens Axboe
On Fri, Nov 04 2016, Jan Kara wrote:
> Hello,
>
> I've noticed that in several places we need to unmap metadata in the buffer
> cache for a range of blocks, and we do it by iterating over all blocks in the
> given range. Let's provide a helper function for that and implement it in a
> way that is more efficient for large ranges of blocks. Also clean up other
> uses of unmap_underlying_metadata(). The patches passed xfstests for ext2 and
> ext4.
>
> Jens, can you merge these patches if they look fine to you?

Looks clean to me. There's a small typo ("used to used") in the comment
for patch #1, but I'll just correct that manually before applying.

--
Jens Axboe



[lkp] [ext4] adad5aa544: fio.write_bw_MBps +4074.4% improvement

kernel test robot

Greetings,

FYI, we noticed a +4074.4% improvement of fio.write_bw_MBps due to commit:


commit adad5aa544e281d84f837b2786809611cb35a999 ("ext4: Use clean_bdev_aliases() instead of iteration")
https://github.com/0day-ci/linux Jan-Kara/fs-Provide-function-to-unmap-metadata-for-a-range-of-blocks/20161105-030924

in testcase: fio-basic
on test machine: 56 threads Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz with 256G memory
with following parameters:

        disk: 2pmem
        fs: ext4
        runtime: 200s
        nr_task: 50%
        time_based: tb
        rw: randwrite
        bs: 4k
        ioengine: libaio
        test_size: 200G
        cpufreq_governor: performance

Fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user.
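
For reference, the parameter list above corresponds roughly to a fio job like the following (a reconstruction for illustration only; the exact job file is generated by lkp from job.yaml, and the directory and numjobs values are assumptions):

```ini
[global]
bs=4k
ioengine=libaio
rw=randwrite
time_based
runtime=200
size=200G
directory=/mnt/pmem0      ; assumed ext4 mount point of one pmem device

[randwrite]
numjobs=28                ; nr_task=50% of 56 hardware threads
```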

In addition to that, the commit also has significant impact on the following tests:

+------------------+-----------------------------------------------------------------------+
| testcase: change | fio-basic:  fio.write_bw_MBps +3928.3% improvement   |
| test machine     | 56 threads Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz with 256G memory |
| test parameters  | bs=4k                                                                 |
|                  | cpufreq_governor=performance                                          |
|                  | disk=2pmem                                                            |
|                  | fs=ext4                                                               |
|                  | ioengine=sync                                                         |
|                  | nr_task=50%                                                           |
|                  | runtime=200s                                                          |
|                  | rw=randwrite                                                          |
|                  | test_size=200G                                                        |
|                  | time_based=tb                                                         |
+------------------+-----------------------------------------------------------------------+
| testcase: change | fio-basic: fio.write_bw_MBps +88.0% improvement                   |
| test machine     | 16 threads Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz with 8G memory       |
| test parameters  | bs=4k                                                                 |
|                  | cpufreq_governor=performance                                          |
|                  | disk=1SSD                                                             |
|                  | fs=ext4                                                               |
|                  | ioengine=sync                                                         |
|                  | nr_task=64                                                            |
|                  | runtime=300s                                                          |
|                  | rw=randwrite                                                          |
|                  | test_size=400g                                                        |
+------------------+-----------------------------------------------------------------------+
| testcase: change | trinity:                                                              |
| test machine     | qemu-system-x86_64 -enable-kvm -cpu IvyBridge -m 360M                 |
| test parameters  | runtime=300s                                                          |
+------------------+-----------------------------------------------------------------------+


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.

Details are as below:
-------------------------------------------------------------------------------------------------->


To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml

=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/time_based:
  4k/gcc-6/performance/2pmem/ext4/libaio/x86_64-rhel-7.2/50%/debian-x86_64-2016-08-31.cgz/200s/randwrite/lkp-hsw-ep6/200G/fio-basic/tb

commit:
  6f2b562c3a ("direct-io: Use clean_bdev_aliases() instead of handmade iteration")
  adad5aa544 ("ext4: Use clean_bdev_aliases() instead of iteration")

6f2b562c3a89f4a6 adad5aa544e281d84f837b2786
---------------- --------------------------
       fail:runs  %reproduction    fail:runs
           |             |             |    
         %stddev     %change         %stddev
             \          |                \  
     64.45 ±  0%   +4074.4%       2690 ±  2%  fio.write_bw_MBps
      0.05 ± 52%    -100.0%       0.00 ± -1%  fio.latency_1000ms%
      0.01 ±  0%   +1075.0%       0.12 ± 58%  fio.latency_1000us%
      3.41 ±  0%     -97.3%       0.09 ±  8%  fio.latency_10ms%
      0.42 ±  4%     -70.1%       0.12 ±  6%  fio.latency_20ms%
     23.23 ±  1%    -100.0%       0.01 ±  0%  fio.latency_250ms%
     14.50 ±  3%     -82.6%       2.52 ± 24%  fio.latency_250us%
      0.06 ± 20%     +86.4%       0.10 ±  4%  fio.latency_2ms%
      0.01 ±  0%    +100.0%       0.02 ±  0%  fio.latency_4ms%
      1.95 ±  6%     -99.6%       0.01 ± 57%  fio.latency_500ms%
     53.08 ±  2%     +52.0%      80.68 ±  3%  fio.latency_500us%
      0.14 ± 13%   +1492.9%       2.23 ±  3%  fio.latency_50ms%
      1.05 ± 13%    -100.0%       0.00 ± -1%  fio.latency_750ms%
      2.12 ± 52%    +560.9%      14.03 ± 16%  fio.latency_750us%
  26414526 ±  0%   +4069.9%  1.101e+09 ±  2%  fio.time.file_system_outputs
    673.00 ±  3%    +915.8%       6836 ± 12%  fio.time.involuntary_context_switches
     35917 ±  2%    +103.7%      73176 ± 11%  fio.time.minor_page_faults
     21.25 ±  3%   +4110.6%     894.75 ±  1%  fio.time.percent_of_cpu_this_job_got
     29.64 ±  3%   +5107.6%       1543 ±  1%  fio.time.system_time
     13.86 ±  2%   +1755.6%     257.23 ±  1%  fio.time.user_time
     58627 ±  0%    +808.6%     532701 ±  0%  fio.time.voluntary_context_switches
    166912 ±  2%     -99.6%     586.00 ±  2%  fio.write_clat_90%_us
    207872 ±  0%     -99.7%     638.00 ±  2%  fio.write_clat_95%_us
    521216 ± 20%     -92.8%      37632 ±  1%  fio.write_clat_99%_us
     52577 ±  0%     -97.6%       1261 ±  2%  fio.write_clat_mean_us
    105001 ±  2%     -94.6%       5676 ±  1%  fio.write_clat_stddev
     16498 ±  0%   +4074.4%     688706 ±  2%  fio.write_iops
      1693 ±  0%     -97.7%      38.82 ±  2%  fio.write_slat_mean_us
     17054 ±  0%     -94.0%       1021 ±  1%  fio.write_slat_stddev
     53581 ±  7%    +272.4%     199560 ±  3%  softirqs.RCU
    158867 ±  1%     +69.5%     269287 ±  0%  softirqs.SCHED
    565968 ±  0%    +175.8%    1560998 ±  1%  softirqs.TIMER
   3629667 ±  9%    +237.9%   12265065 ±  6%  cpuidle.C1-HSW.time
     66590 ± 12%    +689.8%     525953 ±  2%  cpuidle.C1-HSW.usage
 4.193e+09 ± 11%     -29.6%  2.952e+09 ±  3%  cpuidle.C3-HSW.time
   4346531 ± 11%     -28.6%    3101279 ±  3%  cpuidle.C3-HSW.usage
      2467 ± 22%    +434.5%      13186 ±  2%  cpuidle.POLL.usage
   1321503 ± 47%   +4167.7%   56398032 ± 19%  numa-numastat.node0.local_node
    542888 ± 99%   +2637.3%   14860275 ± 50%  numa-numastat.node0.numa_foreign
   1321513 ± 47%   +4167.7%   56398044 ± 19%  numa-numastat.node0.numa_hit
    592541 ± 99%   +1833.8%   11458393 ± 54%  numa-numastat.node0.numa_miss
   1425379 ± 39%   +3705.7%   54245725 ± 29%  numa-numastat.node1.local_node
    590754 ± 99%   +1840.5%   11463742 ± 55%  numa-numastat.node1.numa_foreign
   1425389 ± 39%   +3705.7%   54245733 ± 29%  numa-numastat.node1.numa_hit
    541101 ± 99%   +2647.3%   14865705 ± 50%  numa-numastat.node1.numa_miss
     66807 ±  0%   +5149.0%    3506733 ±  1%  vmstat.io.bo
     19111 ±  1%    +906.8%     192416 ±  0%  vmstat.memory.buff
  12571099 ±  0%    +231.6%   41679612 ±  0%  vmstat.memory.cache
  32807369 ±  0%     -89.3%    3505038 ±  0%  vmstat.memory.free
     27.00 ±  0%     -33.3%      18.00 ±  0%  vmstat.procs.b
      2.00 ±  0%    +475.0%      11.50 ±  4%  vmstat.procs.r
      2235 ±  4%    +414.3%      11499 ±  3%  vmstat.system.cs
     57912 ±  0%      +1.1%      58522 ±  0%  vmstat.system.in
      7.67 ±  1%    +219.5%      24.51 ±  1%  turbostat.%Busy
    160.00 ±  1%    +132.2%     371.50 ±  1%  turbostat.Avg_MHz
      2084 ±  1%     -27.3%       1515 ±  0%  turbostat.Bzy_MHz
     30.71 ±  4%     +10.2%      33.84 ±  1%  turbostat.CPU%c1
     37.92 ± 15%     -32.6%      25.57 ±  9%  turbostat.CPU%c3
     68.50 ±  3%      -6.9%      63.75 ±  3%  turbostat.CoreTmp
      4.84 ± 11%     +69.0%       8.19 ± 21%  turbostat.PKG_%
     20.77 ± 19%     -90.5%       1.97 ± 47%  turbostat.Pkg%pc2
    103.88 ±  1%     +12.3%     116.62 ±  0%  turbostat.PkgWatt
     81.35 ±  0%     +45.5%     118.40 ±  0%  turbostat.RAMWatt
    162447 ±  0%    +131.8%     376599 ±  0%  meminfo.Active
     58081 ±  1%    +364.5%     269806 ±  1%  meminfo.Active(file)
     19012 ±  1%    +911.6%     192331 ±  0%  meminfo.Buffers
  11732139 ±  0%    +240.7%   39971418 ±  0%  meminfo.Cached
    202839 ±  0%     -78.8%      43021 ±  3%  meminfo.CmaFree
   8269379 ±  0%     -14.8%    7041826 ±  0%  meminfo.Dirty
  11689070 ±  0%    +236.3%   39313737 ±  0%  meminfo.Inactive
  11567135 ±  0%    +238.8%   39191794 ±  0%  meminfo.Inactive(file)
  32825801 ±  0%     -89.3%    3516995 ±  0%  meminfo.MemFree
    820135 ±  0%    +106.8%    1695987 ±  0%  meminfo.SReclaimable
    920315 ±  0%     +95.2%    1796868 ±  0%  meminfo.Slab
    659.25 ±100%  +86807.4%     572937 ±  0%  meminfo.Unevictable
   4075428 ±  5%     -14.0%    3506877 ±  8%  numa-meminfo.node0.Dirty
   5833539 ±  6%    +245.8%   20174456 ±  0%  numa-meminfo.node0.FilePages
   5797730 ±  6%    +242.3%   19845628 ±  0%  numa-meminfo.node0.Inactive
   5708833 ±  5%    +245.6%   19726898 ±  0%  numa-meminfo.node0.Inactive(file)
  16344904 ±  4%     -88.7%    1841535 ±  1%  numa-meminfo.node0.MemFree
   8261907 ±  8%    +175.5%   22765276 ±  0%  numa-meminfo.node0.MemUsed
    381.50 ±100%  +75068.0%     286766 ±  0%  numa-meminfo.node0.Unevictable
     76111 ± 29%    +267.1%     279386 ± 12%  numa-meminfo.node1.Active
     24102 ± 30%    +862.9%     232089 ±  5%  numa-meminfo.node1.Active(file)
   4193777 ±  5%     -15.8%    3532488 ±  8%  numa-meminfo.node1.Dirty
   5920446 ±  5%    +237.7%   19992694 ±  0%  numa-meminfo.node1.FilePages
   5894203 ±  5%    +230.3%   19470961 ±  0%  numa-meminfo.node1.Inactive
   5861163 ±  4%    +232.1%   19467743 ±  0%  numa-meminfo.node1.Inactive(file)
  16477881 ±  4%     -89.9%    1671879 ±  2%  numa-meminfo.node1.MemFree
   8272729 ±  8%    +179.0%   23078732 ±  0%  numa-meminfo.node1.MemUsed
    412948 ± 82%    +176.4%    1141200 ±  6%  numa-meminfo.node1.SReclaimable
    457998 ± 73%    +159.9%    1190119 ±  6%  numa-meminfo.node1.Slab
    277.50 ±100%    +1e+05%     286552 ±  0%  numa-meminfo.node1.Unevictable
    946.00 ±  9%     +31.9%       1248 ±  5%  slabinfo.Acpi-ParseExt.active_objs
      1003 ±  7%     +28.7%       1291 ±  4%  slabinfo.Acpi-ParseExt.num_objs
   2728626 ±  0%    +260.1%    9826661 ±  0%  slabinfo.buffer_head.active_objs
     69964 ±  0%    +260.4%     252129 ±  0%  slabinfo.buffer_head.active_slabs
   2728626 ±  0%    +260.4%    9833052 ±  0%  slabinfo.buffer_head.num_objs
     69964 ±  0%    +260.4%     252129 ±  0%  slabinfo.buffer_head.num_slabs
    149.00 ± 37%    +642.8%       1106 ±  3%  slabinfo.dquot.active_objs
    149.00 ± 37%    +642.8%       1106 ±  3%  slabinfo.dquot.num_objs
    915961 ±  1%    +163.7%    2415651 ±  0%  slabinfo.ext4_extent_status.active_objs
      8979 ±  1%    +358.0%      41130 ±  1%  slabinfo.ext4_extent_status.active_slabs
    915961 ±  1%    +358.0%    4195339 ±  1%  slabinfo.ext4_extent_status.num_objs
      8979 ±  1%    +358.0%      41130 ±  1%  slabinfo.ext4_extent_status.num_slabs
    126.00 ±  0%    +237.9%     425.75 ± 15%  slabinfo.ext4_io_end.active_objs
    126.00 ±  0%    +237.9%     425.75 ± 15%  slabinfo.ext4_io_end.num_objs
      3350 ± 11%     +32.3%       4432 ±  2%  slabinfo.jbd2_journal_handle.active_objs
      3350 ± 11%     +32.3%       4432 ±  2%  slabinfo.jbd2_journal_handle.num_objs
      1818 ±  0%   +1433.8%      27884 ±  2%  slabinfo.jbd2_journal_head.active_objs
     67.50 ±  1%   +1210.4%     884.50 ±  3%  slabinfo.jbd2_journal_head.active_slabs
      2315 ±  1%   +1199.4%      30090 ±  3%  slabinfo.jbd2_journal_head.num_objs
     67.50 ±  1%   +1210.4%     884.50 ±  3%  slabinfo.jbd2_journal_head.num_slabs
      1722 ±  3%     +11.7%       1923 ±  4%  slabinfo.mnt_cache.active_objs
      1722 ±  3%     +11.7%       1923 ±  4%  slabinfo.mnt_cache.num_objs
 8.187e+11 ±  5%     -21.5%  6.428e+11 ±  2%  perf-stat.branch-instructions
      0.06 ± 13%   +2272.9%       1.46 ±  1%  perf-stat.branch-miss-rate%
 5.033e+08 ± 15%   +1758.8%  9.356e+09 ±  3%  perf-stat.branch-misses
      3.46 ±  2%    +746.3%      29.25 ±  1%  perf-stat.cache-miss-rate%
 1.366e+08 ±  8%   +7828.0%  1.083e+10 ±  3%  perf-stat.cache-misses
 3.951e+09 ±  7%    +837.4%  3.704e+10 ±  2%  perf-stat.cache-references
    447606 ±  4%    +420.9%    2331472 ±  3%  perf-stat.context-switches
  1.91e+12 ±  3%    +131.8%  4.427e+12 ±  2%  perf-stat.cpu-cycles
      7857 ±  6%    +196.2%      23270 ±  9%  perf-stat.cpu-migrations
      0.03 ±  7%   +3471.0%       1.23 ±  6%  perf-stat.dTLB-load-miss-rate%
 4.077e+08 ±  5%   +2913.0%  1.228e+10 ±  7%  perf-stat.dTLB-load-misses
 1.195e+12 ± 12%     -17.4%  9.878e+11 ±  4%  perf-stat.dTLB-loads
      0.01 ±  8%   +1635.3%       0.13 ± 21%  perf-stat.dTLB-store-miss-rate%
  76890357 ±  3%    +976.0%  8.273e+08 ± 23%  perf-stat.dTLB-store-misses
 1.067e+12 ± 11%     -38.9%  6.521e+11 ±  5%  perf-stat.dTLB-stores
     57.00 ±  1%     -38.6%      35.00 ±  2%  perf-stat.iTLB-load-miss-rate%
  80424911 ±  1%     -15.4%   68057203 ±  2%  perf-stat.iTLB-load-misses
  60673587 ±  2%    +108.3%  1.264e+08 ±  2%  perf-stat.iTLB-loads
 4.486e+12 ±  5%     -20.6%  3.563e+12 ±  2%  perf-stat.instructions
     55749 ±  3%      -6.1%      52369 ±  1%  perf-stat.instructions-per-iTLB-miss
      2.35 ±  2%     -65.7%       0.81 ±  1%  perf-stat.ipc
    420593 ±  0%      +9.3%     459778 ±  1%  perf-stat.minor-faults
  48205740 ±  7%   +9150.7%  4.459e+09 ± 11%  perf-stat.node-load-misses
  47211353 ±  3%   +9255.4%  4.417e+09 ± 17%  perf-stat.node-loads
  16243571 ±  8%   +2951.8%  4.957e+08 ± 10%  perf-stat.node-store-misses
  19185688 ±  4%   +2840.8%  5.642e+08 ±  4%  perf-stat.node-stores
    420632 ±  0%      +9.3%     459778 ±  1%  perf-stat.page-faults
     14521 ±  1%    +364.6%      67463 ±  1%  proc-vmstat.nr_active_file
   3310726 ±  0%   +4073.0%  1.382e+08 ±  2%  proc-vmstat.nr_dirtied
   2067310 ±  0%     -14.9%    1760148 ±  0%  proc-vmstat.nr_dirty
   2938080 ±  0%    +241.8%   10041069 ±  0%  proc-vmstat.nr_file_pages
     50709 ±  0%     -78.9%      10677 ±  4%  proc-vmstat.nr_free_cma
   8206148 ±  0%     -89.3%     879094 ±  0%  proc-vmstat.nr_free_pages
   2892082 ±  0%    +238.8%    9798085 ±  0%  proc-vmstat.nr_inactive_file
    205049 ±  0%    +106.8%     423996 ±  0%  proc-vmstat.nr_slab_reclaimable
    164.75 ±100%  +86842.0%     143237 ±  0%  proc-vmstat.nr_unevictable
   1377940 ±  1%   +9813.4%  1.366e+08 ±  2%  proc-vmstat.nr_written
     14521 ±  1%    +364.6%      67463 ±  1%  proc-vmstat.nr_zone_active_file
   2892082 ±  0%    +238.8%    9798122 ±  0%  proc-vmstat.nr_zone_inactive_file
    164.75 ±100%  +86842.0%     143237 ±  0%  proc-vmstat.nr_zone_unevictable
   2067310 ±  0%     -14.9%    1760151 ±  0%  proc-vmstat.nr_zone_write_pending
   1133626 ±  6%   +2222.1%   26324018 ± 13%  proc-vmstat.numa_foreign
    795.25 ± 82%   +4364.9%      35507 ± 25%  proc-vmstat.numa_hint_faults
    493.75 ± 76%   +5956.4%      29903 ± 29%  proc-vmstat.numa_hint_faults_local
   2748483 ±  3%   +3925.7%  1.106e+08 ±  5%  proc-vmstat.numa_hit
   2748463 ±  3%   +3925.8%  1.106e+08 ±  5%  proc-vmstat.numa_local
   1133626 ±  6%   +2222.1%   26324092 ± 13%  proc-vmstat.numa_miss
    295.00 ± 93%    +578.1%       2000 ± 21%  proc-vmstat.numa_pages_migrated
      2372 ± 77%   +1632.1%      41094 ± 20%  proc-vmstat.numa_pte_updates
      2891 ±  7%     +43.7%       4155 ± 11%  proc-vmstat.pgactivate
      0.00 ±  0%      +Inf%    5413142 ±  5%  proc-vmstat.pgalloc_dma32
   4013920 ±  0%   +3180.9%  1.317e+08 ±  3%  proc-vmstat.pgalloc_normal
    444351 ±  0%  +28262.2%   1.26e+08 ±  3%  proc-vmstat.pgfree
    295.00 ± 93%    +580.8%       2008 ± 21%  proc-vmstat.pgmigrate_success
  13537222 ±  0%   +5160.6%  7.121e+08 ±  2%  proc-vmstat.pgpgout
    242.50 ±100%  +70361.6%     170869 ±  0%  proc-vmstat.unevictable_pgs_culled
   1396319 ±  5%   +1827.5%   26914501 ±  6%  numa-vmstat.node0.nr_dirtied
   1018825 ±  5%     -14.0%     876621 ±  8%  numa-vmstat.node0.nr_dirty
   1458483 ±  6%    +245.8%    5043691 ±  0%  numa-vmstat.node0.nr_file_pages
   4086118 ±  4%     -88.7%     460244 ±  1%  numa-vmstat.node0.nr_free_pages
   1427304 ±  5%    +245.5%    4931802 ±  0%  numa-vmstat.node0.nr_inactive_file
     95.25 ±100%  +75177.2%      71701 ±  0%  numa-vmstat.node0.nr_unevictable
    377494 ±  7%   +6797.6%   26037879 ±  6%  numa-vmstat.node0.nr_written
   1427304 ±  5%    +245.5%    4931851 ±  0%  numa-vmstat.node0.nr_zone_inactive_file
     95.25 ±100%  +75177.2%      71701 ±  0%  numa-vmstat.node0.nr_zone_unevictable
   1018825 ±  5%     -14.0%     876636 ±  8%  numa-vmstat.node0.nr_zone_write_pending
    613506 ± 82%    +938.4%    6370772 ± 55%  numa-vmstat.node0.numa_foreign
   1209554 ± 44%   +1743.8%   22301956 ± 17%  numa-vmstat.node0.numa_hit
   1209543 ± 44%   +1743.8%   22301941 ± 17%  numa-vmstat.node0.numa_local
    622639 ± 83%    +717.3%    5088961 ± 41%  numa-vmstat.node0.numa_miss
      6025 ± 30%    +863.1%      58033 ±  5%  numa-vmstat.node1.nr_active_file
   1399379 ±  4%   +1908.6%   28107434 ± 12%  numa-vmstat.node1.nr_dirtied
   1048423 ±  5%     -15.8%     882992 ±  8%  numa-vmstat.node1.nr_dirty
   1480223 ±  5%    +237.7%    4998217 ±  0%  numa-vmstat.node1.nr_file_pages
     50710 ±  0%     -78.7%      10798 ±  4%  numa-vmstat.node1.nr_free_cma
   4119342 ±  4%     -89.9%     417911 ±  2%  numa-vmstat.node1.nr_free_pages
   1465406 ±  4%    +232.1%    4866978 ±  0%  numa-vmstat.node1.nr_inactive_file
    103241 ± 82%    +176.3%     285301 ±  6%  numa-vmstat.node1.nr_slab_reclaimable
     69.00 ±100%    +1e+05%      71643 ±  0%  numa-vmstat.node1.nr_unevictable
    350955 ±  7%   +7657.2%   27224439 ± 12%  numa-vmstat.node1.nr_written
      6025 ± 30%    +863.1%      58033 ±  5%  numa-vmstat.node1.nr_zone_active_file
   1465406 ±  4%    +232.1%    4866977 ±  0%  numa-vmstat.node1.nr_zone_inactive_file
     69.00 ±100%    +1e+05%      71643 ±  0%  numa-vmstat.node1.nr_zone_unevictable
   1048423 ±  5%     -15.8%     882992 ±  8%  numa-vmstat.node1.nr_zone_write_pending
    537217 ± 96%    +837.7%    5037423 ± 42%  numa-vmstat.node1.numa_foreign
   1283021 ± 41%   +1606.3%   21892238 ± 32%  numa-vmstat.node1.numa_hit
   1283007 ± 41%   +1606.3%   21892229 ± 32%  numa-vmstat.node1.numa_local
    528083 ± 96%   +1096.6%    6319275 ± 55%  numa-vmstat.node1.numa_miss
      3799 ±  0%    +311.7%      15643 ±  1%  sched_debug.cfs_rq:/.exec_clock.avg
     88813 ±  0%     -31.8%      60546 ± 15%  sched_debug.cfs_rq:/.exec_clock.max
    135.94 ± 11%   +1748.1%       2512 ± 42%  sched_debug.cfs_rq:/.exec_clock.min
     16353 ±  0%     -33.8%      10824 ± 12%  sched_debug.cfs_rq:/.exec_clock.stddev
     45073 ± 10%    +356.5%     205780 ± 13%  sched_debug.cfs_rq:/.load.avg
    173873 ± 11%     +91.4%     332795 ±  3%  sched_debug.cfs_rq:/.load.stddev
     31.39 ±  1%    +525.1%     196.23 ±  9%  sched_debug.cfs_rq:/.load_avg.avg
    144.71 ±  0%     +58.6%     229.45 ± 12%  sched_debug.cfs_rq:/.load_avg.stddev
     20203 ±  7%     +47.9%      29889 ±  9%  sched_debug.cfs_rq:/.min_vruntime.avg
    110963 ±  2%     -31.0%      76608 ± 13%  sched_debug.cfs_rq:/.min_vruntime.max
     17352 ±  2%     -35.4%      11201 ± 11%  sched_debug.cfs_rq:/.min_vruntime.stddev
      0.09 ±  8%    +181.7%       0.26 ±  7%  sched_debug.cfs_rq:/.nr_running.avg
      0.28 ±  2%     +50.6%       0.43 ±  2%  sched_debug.cfs_rq:/.nr_running.stddev
     29.50 ±  1%    +246.8%     102.30 ± 24%  sched_debug.cfs_rq:/.runnable_load_avg.avg
    141.43 ±  0%     +44.7%     204.62 ±  9%  sched_debug.cfs_rq:/.runnable_load_avg.stddev
     17352 ±  2%     -35.4%      11202 ± 11%  sched_debug.cfs_rq:/.spread0.stddev
    188.45 ±  3%     +92.8%     363.35 ±  4%  sched_debug.cfs_rq:/.util_avg.avg
    158.51 ±  1%     +45.3%     230.30 ±  7%  sched_debug.cfs_rq:/.util_avg.stddev
     99800 ±  0%     -27.8%      72076 ± 19%  sched_debug.cpu.avg_idle.min
    182448 ±  3%     +13.6%     207291 ±  7%  sched_debug.cpu.avg_idle.stddev
      6.26 ±  8%    +158.0%      16.16 ±  3%  sched_debug.cpu.clock.stddev
      6.26 ±  8%    +158.0%      16.16 ±  3%  sched_debug.cpu.clock_task.stddev
     29.01 ±  1%    +240.9%      98.90 ± 22%  sched_debug.cpu.cpu_load[0].avg
    141.19 ±  0%     +44.3%     203.80 ± 12%  sched_debug.cpu.cpu_load[0].stddev
     29.56 ±  1%    +350.4%     133.15 ± 19%  sched_debug.cpu.cpu_load[1].avg
    142.04 ±  0%     +49.6%     212.45 ±  9%  sched_debug.cpu.cpu_load[1].stddev
     29.26 ±  1%    +342.0%     129.30 ± 19%  sched_debug.cpu.cpu_load[2].avg
    141.48 ±  0%     +44.3%     204.21 ±  9%  sched_debug.cpu.cpu_load[2].stddev
     28.93 ±  1%    +327.8%     123.79 ± 18%  sched_debug.cpu.cpu_load[3].avg
    141.35 ±  0%     +38.0%     195.12 ± 10%  sched_debug.cpu.cpu_load[3].stddev
     28.53 ±  1%    +312.9%     117.79 ± 19%  sched_debug.cpu.cpu_load[4].avg
    141.39 ±  0%     +30.9%     185.09 ± 10%  sched_debug.cpu.cpu_load[4].stddev
    182.95 ± 10%    +176.1%     505.11 ±  9%  sched_debug.cpu.curr->pid.avg
    709.20 ±  5%     +32.1%     937.00 ±  1%  sched_debug.cpu.curr->pid.stddev
     44133 ± 11%    +374.4%     209378 ± 14%  sched_debug.cpu.load.avg
    171770 ± 11%     +94.3%     333803 ±  4%  sched_debug.cpu.load.stddev
     33181 ±  0%     +19.9%      39776 ±  0%  sched_debug.cpu.nr_load_updates.avg
     98035 ±  0%     -23.4%      75114 ±  9%  sched_debug.cpu.nr_load_updates.max
     10011 ±  4%    +100.8%      20108 ± 14%  sched_debug.cpu.nr_load_updates.min
     12893 ±  1%     -32.8%       8664 ± 12%  sched_debug.cpu.nr_load_updates.stddev
      0.09 ± 11%    +184.3%       0.26 ±  9%  sched_debug.cpu.nr_running.avg
      0.28 ±  4%     +51.2%       0.43 ±  2%  sched_debug.cpu.nr_running.stddev
      5765 ±  2%    +317.3%      24062 ±  2%  sched_debug.cpu.nr_switches.avg
     26550 ± 39%    +581.1%     180832 ± 16%  sched_debug.cpu.nr_switches.max
      1909 ±  5%    +151.0%       4791 ± 16%  sched_debug.cpu.nr_switches.min
      4347 ± 23%    +554.5%      28453 ± 13%  sched_debug.cpu.nr_switches.stddev
      0.39 ±  0%     -35.7%       0.25 ± 12%  sched_debug.cpu.nr_uninterruptible.avg
     10.88 ± 26%    +160.9%      28.38 ± 22%  sched_debug.cpu.nr_uninterruptible.max
    -12.44 ±-15%    +120.1%     -27.38 ±-29%  sched_debug.cpu.nr_uninterruptible.min
      4.40 ± 22%    +192.3%      12.85 ± 26%  sched_debug.cpu.nr_uninterruptible.stddev
      3718 ±  3%    +490.5%      21955 ±  2%  sched_debug.cpu.sched_count.avg
     23269 ± 44%    +665.3%     178092 ± 16%  sched_debug.cpu.sched_count.max
    170.69 ± 30%   +1730.8%       3125 ± 25%  sched_debug.cpu.sched_count.min
      4022 ± 27%    +603.4%      28294 ± 13%  sched_debug.cpu.sched_count.stddev
      1764 ±  4%    +509.7%      10757 ±  2%  sched_debug.cpu.sched_goidle.avg
     11576 ± 44%    +666.5%      88735 ± 16%  sched_debug.cpu.sched_goidle.max
     51.69 ± 28%   +2705.3%       1450 ± 24%  sched_debug.cpu.sched_goidle.min
      2023 ± 26%    +598.8%      14140 ± 13%  sched_debug.cpu.sched_goidle.stddev
      1782 ±  4%    +512.4%      10915 ±  2%  sched_debug.cpu.ttwu_count.avg
     15347 ± 35%    +467.3%      87069 ± 18%  sched_debug.cpu.ttwu_count.max
     76.25 ± 47%   +1728.4%       1394 ± 26%  sched_debug.cpu.ttwu_count.min
      2407 ± 23%    +484.3%      14068 ± 17%  sched_debug.cpu.ttwu_count.stddev
      1010 ±  0%     +75.5%       1772 ±  1%  sched_debug.cpu.ttwu_local.avg
      6785 ± 20%     -35.4%       4380 ± 12%  sched_debug.cpu.ttwu_local.max
     36.69 ± 18%    +959.8%     388.81 ± 26%  sched_debug.cpu.ttwu_local.min
      1133 ± 15%     -32.6%     764.11 ± 10%  sched_debug.cpu.ttwu_local.stddev
      2.70 ±  3%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.___might_sleep.__might_sleep.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks
      1.21 ±  9%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.___might_sleep.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages
      0.00 ± -1%      +Inf%       1.38 ±  7%  perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.pagecache_get_page.grab_cache_page_write_begin
      0.00 ± -1%      +Inf%       2.16 ±  4%  perf-profile.calltrace.cycles-pp.__block_commit_write.block_write_end.generic_write_end.ext4_da_write_end.generic_perform_write
      0.00 ± -1%      +Inf%      15.40 ±  3%  perf-profile.calltrace.cycles-pp.__block_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
      0.00 ± -1%      +Inf%      15.38 ±  3%  perf-profile.calltrace.cycles-pp.__block_write_begin_int.__block_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter
      0.00 ± -1%      +Inf%       4.40 ± 13%  perf-profile.calltrace.cycles-pp.__copy_user_nocache.pmem_do_bvec.pmem_make_request.generic_make_request.submit_bio
      0.00 ± -1%      +Inf%       2.36 ± 18%  perf-profile.calltrace.cycles-pp.__delete_from_page_cache.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_node_memcg
      0.00 ± -1%      +Inf%       2.35 ±  5%  perf-profile.calltrace.cycles-pp.__es_insert_extent.ext4_es_insert_extent.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin
      0.00 ± -1%      +Inf%       0.99 ± 12%  perf-profile.calltrace.cycles-pp.__es_shrink.ext4_es_scan.shrink_slab.shrink_node.kswapd
      0.00 ± -1%      +Inf%       0.94 ±  4%  perf-profile.calltrace.cycles-pp.__es_tree_search.ext4_es_insert_extent.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin
      0.00 ± -1%      +Inf%       1.62 ± 13%  perf-profile.calltrace.cycles-pp.__ext4_journal_start_sb.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
      0.00 ± -1%      +Inf%       2.98 ±  2%  perf-profile.calltrace.cycles-pp.__find_get_block.__getblk_gfp.__read_extent_tree_block.ext4_find_extent.ext4_ext_map_blocks
      0.00 ± -1%      +Inf%       1.58 ±  2%  perf-profile.calltrace.cycles-pp.__find_get_block_slow.__find_get_block.__getblk_gfp.__read_extent_tree_block.ext4_find_extent
     32.81 ±  5%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.__find_get_block_slow.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages
      0.00 ± -1%      +Inf%      29.40 ±  4%  perf-profile.calltrace.cycles-pp.__generic_file_write_iter.ext4_file_write_iter.aio_run_iocb.do_io_submit.sys_io_submit
      0.00 ± -1%      +Inf%       3.17 ±  2%  perf-profile.calltrace.cycles-pp.__getblk_gfp.__read_extent_tree_block.ext4_find_extent.ext4_ext_map_blocks.ext4_da_get_block_prep
      1.22 ±  7%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.__might_sleep.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages.do_writepages
      4.57 ±  2%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.__might_sleep.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages
      0.00 ± -1%      +Inf%       1.53 ±  8%  perf-profile.calltrace.cycles-pp.__page_cache_alloc.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write
      0.00 ± -1%      +Inf%       2.09 ± 19%  perf-profile.calltrace.cycles-pp.__radix_tree_lookup.__delete_from_page_cache.__remove_mapping.shrink_page_list.shrink_inactive_list
     14.60 ±  6%     -95.4%       0.67 ±  4%  perf-profile.calltrace.cycles-pp.__radix_tree_lookup.radix_tree_lookup_slot.find_get_entry.pagecache_get_page.__find_get_block_slow
      0.00 ± -1%      +Inf%       1.65 ±  8%  perf-profile.calltrace.cycles-pp.__radix_tree_lookup.radix_tree_lookup_slot.find_get_entry.pagecache_get_page.grab_cache_page_write_begin
      0.00 ± -1%      +Inf%       3.30 ±  2%  perf-profile.calltrace.cycles-pp.__read_extent_tree_block.ext4_find_extent.ext4_ext_map_blocks.ext4_da_get_block_prep.__block_write_begin_int
      0.00 ± -1%      +Inf%       2.99 ± 16%  perf-profile.calltrace.cycles-pp.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node
      0.00 ± -1%      +Inf%       1.22 ±  7%  perf-profile.calltrace.cycles-pp.__set_page_dirty.mark_buffer_dirty.__block_commit_write.block_write_end.generic_write_end
      0.00 ± -1%      +Inf%       0.92 ± 14%  perf-profile.calltrace.cycles-pp.__test_set_page_writeback.ext4_bio_write_page.mpage_submit_page.mpage_process_page_bufs.mpage_prepare_extent_to_map
     48.30 ±  5%     -67.4%      15.72 ± 10%  perf-profile.calltrace.cycles-pp.__writeback_inodes_wb.wb_writeback.wb_workfn.process_one_work.worker_thread
     48.30 ±  5%     -67.5%      15.72 ± 10%  perf-profile.calltrace.cycles-pp.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback.wb_workfn
      0.00 ± -1%      +Inf%       1.86 ±  8%  perf-profile.calltrace.cycles-pp.add_to_page_cache_lru.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write
      0.00 ± -1%      +Inf%      31.07 ±  4%  perf-profile.calltrace.cycles-pp.aio_run_iocb.do_io_submit.sys_io_submit.entry_SYSCALL_64_fastpath
      0.00 ± -1%      +Inf%       1.48 ±  7%  perf-profile.calltrace.cycles-pp.alloc_pages_current.__page_cache_alloc.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin
      2.38 ±  7%     -85.1%       0.35 ±100%  perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
      0.00 ± -1%      +Inf%       2.39 ± 13%  perf-profile.calltrace.cycles-pp.bio_endio.pmem_make_request.generic_make_request.submit_bio.ext4_io_submit
      0.00 ± -1%      +Inf%       2.19 ±  4%  perf-profile.calltrace.cycles-pp.block_write_end.generic_write_end.ext4_da_write_end.generic_perform_write.__generic_file_write_iter
     49.54 ±  6%     -16.1%      41.58 ±  5%  perf-profile.calltrace.cycles-pp.call_cpuidle.cpu_startup_entry.start_secondary
      0.00 ± -1%      +Inf%       2.56 ±  4%  perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.aio_run_iocb
     50.10 ±  5%     -16.2%      41.98 ±  5%  perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary
     49.52 ±  6%     -16.0%      41.58 ±  5%  perf-profile.calltrace.cycles-pp.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
     46.57 ±  6%     -12.2%      40.89 ±  6%  perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
      0.00 ± -1%      +Inf%      32.31 ±  4%  perf-profile.calltrace.cycles-pp.do_io_submit.sys_io_submit.entry_SYSCALL_64_fastpath
     48.30 ±  5%     -67.5%      15.72 ± 10%  perf-profile.calltrace.cycles-pp.do_writepages.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback
      0.00 ± -1%      +Inf%       1.05 ± 11%  perf-profile.calltrace.cycles-pp.end_page_writeback.ext4_finish_bio.ext4_end_bio.bio_endio.pmem_make_request
      0.32 ±100%  +10536.2%      33.77 ±  4%  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_fastpath
      0.00 ± -1%      +Inf%       0.95 ± 13%  perf-profile.calltrace.cycles-pp.es_do_reclaim_extents.es_reclaim_extents.__es_shrink.ext4_es_scan.shrink_slab
      0.00 ± -1%      +Inf%       0.98 ± 12%  perf-profile.calltrace.cycles-pp.es_reclaim_extents.__es_shrink.ext4_es_scan.shrink_slab.shrink_node
      0.00 ± -1%      +Inf%       0.78 ± 25%  perf-profile.calltrace.cycles-pp.ext4_bio_write_page.mpage_submit_page.mpage_map_and_submit_buffers.ext4_writepages.do_writepages
      0.00 ± -1%      +Inf%       8.89 ± 12%  perf-profile.calltrace.cycles-pp.ext4_bio_write_page.mpage_submit_page.mpage_process_page_bufs.mpage_prepare_extent_to_map.ext4_writepages
      0.00 ± -1%      +Inf%      14.36 ±  3%  perf-profile.calltrace.cycles-pp.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin.ext4_da_write_begin.generic_perform_write
      0.00 ± -1%      +Inf%      22.64 ±  4%  perf-profile.calltrace.cycles-pp.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.aio_run_iocb
      0.00 ± -1%      +Inf%       3.30 ±  5%  perf-profile.calltrace.cycles-pp.ext4_da_write_end.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.aio_run_iocb
      0.00 ± -1%      +Inf%       1.65 ±  6%  perf-profile.calltrace.cycles-pp.ext4_end_bio.bio_endio.pmem_make_request.generic_make_request.submit_bio
      0.00 ± -1%      +Inf%       3.66 ±  3%  perf-profile.calltrace.cycles-pp.ext4_es_insert_extent.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin.ext4_da_write_begin
      0.00 ± -1%      +Inf%       4.21 ±  5%  perf-profile.calltrace.cycles-pp.ext4_es_lookup_extent.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin.ext4_da_write_begin
      0.00 ± -1%      +Inf%       0.99 ± 12%  perf-profile.calltrace.cycles-pp.ext4_es_scan.shrink_slab.shrink_node.kswapd.kthread
      0.00 ± -1%      +Inf%       5.80 ±  3%  perf-profile.calltrace.cycles-pp.ext4_ext_map_blocks.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin.ext4_da_write_begin
     46.91 ±  5%     -97.7%       1.07 ± 24%  perf-profile.calltrace.cycles-pp.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages.do_writepages.__writeback_single_inode
      0.00 ± -1%      +Inf%      29.67 ±  4%  perf-profile.calltrace.cycles-pp.ext4_file_write_iter.aio_run_iocb.do_io_submit.sys_io_submit.entry_SYSCALL_64_fastpath
      0.00 ± -1%      +Inf%       5.40 ±  3%  perf-profile.calltrace.cycles-pp.ext4_find_extent.ext4_ext_map_blocks.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin
      0.00 ± -1%      +Inf%       1.31 ± 11%  perf-profile.calltrace.cycles-pp.ext4_finish_bio.ext4_end_bio.bio_endio.pmem_make_request.generic_make_request
      0.00 ± -1%      +Inf%       6.92 ± 11%  perf-profile.calltrace.cycles-pp.ext4_io_submit.ext4_bio_write_page.mpage_submit_page.mpage_process_page_bufs.mpage_prepare_extent_to_map
      0.00 ± -1%      +Inf%       0.86 ± 24%  perf-profile.calltrace.cycles-pp.ext4_io_submit.ext4_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes
     47.67 ±  5%     -96.2%       1.81 ± 23%  perf-profile.calltrace.cycles-pp.ext4_map_blocks.ext4_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes
      0.00 ± -1%      +Inf%       1.10 ± 16%  perf-profile.calltrace.cycles-pp.ext4_releasepage.try_to_release_page.shrink_page_list.shrink_inactive_list.shrink_node_memcg
     48.29 ±  5%     -67.6%      15.63 ± 10%  perf-profile.calltrace.cycles-pp.ext4_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb
      1.20 ± 10%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.find_get_entry.__find_get_block_slow.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks
      0.00 ± -1%      +Inf%       1.21 ±  2%  perf-profile.calltrace.cycles-pp.find_get_entry.pagecache_get_page.__find_get_block_slow.__find_get_block.__getblk_gfp
     20.02 ±  6%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.find_get_entry.pagecache_get_page.__find_get_block_slow.unmap_underlying_metadata.ext4_ext_map_blocks
      0.00 ± -1%      +Inf%       1.70 ±  9%  perf-profile.calltrace.cycles-pp.find_get_entry.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write
      0.00 ± -1%      +Inf%       0.90 ± 12%  perf-profile.calltrace.cycles-pp.find_get_pages_tag.pagevec_lookup_tag.mpage_prepare_extent_to_map.ext4_writepages.do_writepages
      0.00 ± -1%      +Inf%       7.36 ±  8%  perf-profile.calltrace.cycles-pp.generic_make_request.submit_bio.ext4_io_submit.ext4_bio_write_page.mpage_submit_page
      0.00 ± -1%      +Inf%       0.86 ± 24%  perf-profile.calltrace.cycles-pp.generic_make_request.submit_bio.ext4_io_submit.ext4_writepages.do_writepages
      0.00 ± -1%      +Inf%      29.06 ±  4%  perf-profile.calltrace.cycles-pp.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.aio_run_iocb.do_io_submit
      0.00 ± -1%      +Inf%       2.48 ±  5%  perf-profile.calltrace.cycles-pp.generic_write_end.ext4_da_write_end.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
      0.00 ± -1%      +Inf%       1.15 ±  9%  perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.pagecache_get_page
      0.00 ± -1%      +Inf%       5.26 ±  8%  perf-profile.calltrace.cycles-pp.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
      0.96 ±  6%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter.call_cpuidle
      0.00 ± -1%      +Inf%       1.34 ± 12%  perf-profile.calltrace.cycles-pp.jbd2__journal_start.__ext4_journal_start_sb.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter
      0.00 ± -1%      +Inf%       1.04 ± 17%  perf-profile.calltrace.cycles-pp.jbd2_journal_try_to_free_buffers.ext4_releasepage.try_to_release_page.shrink_page_list.shrink_inactive_list
      0.00 ± -1%      +Inf%       6.95 ± 14%  perf-profile.calltrace.cycles-pp.kswapd.kthread.ret_from_fork
     48.40 ±  5%     -52.7%      22.88 ±  2%  perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
      0.91 ±  9%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter.call_cpuidle
      0.00 ± -1%      +Inf%       1.57 ±  6%  perf-profile.calltrace.cycles-pp.mark_buffer_dirty.__block_commit_write.block_write_end.generic_write_end.ext4_da_write_end
      0.00 ± -1%      +Inf%       1.34 ± 25%  perf-profile.calltrace.cycles-pp.mpage_map_and_submit_buffers.ext4_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes
      0.00 ± -1%      +Inf%      11.30 ± 12%  perf-profile.calltrace.cycles-pp.mpage_prepare_extent_to_map.ext4_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes
      0.00 ± -1%      +Inf%      10.02 ± 11%  perf-profile.calltrace.cycles-pp.mpage_process_page_bufs.mpage_prepare_extent_to_map.ext4_writepages.do_writepages.__writeback_single_inode
      0.00 ± -1%      +Inf%       0.84 ± 25%  perf-profile.calltrace.cycles-pp.mpage_submit_page.mpage_map_and_submit_buffers.ext4_writepages.do_writepages.__writeback_single_inode
      0.00 ± -1%      +Inf%       9.40 ± 12%  perf-profile.calltrace.cycles-pp.mpage_submit_page.mpage_process_page_bufs.mpage_prepare_extent_to_map.ext4_writepages.do_writepages
      0.00 ± -1%      +Inf%       1.28 ±  2%  perf-profile.calltrace.cycles-pp.pagecache_get_page.__find_get_block_slow.__find_get_block.__getblk_gfp.__read_extent_tree_block
     26.12 ±  6%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.pagecache_get_page.__find_get_block_slow.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks
      0.00 ± -1%      +Inf%       5.19 ±  8%  perf-profile.calltrace.cycles-pp.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter
      0.00 ± -1%      +Inf%       0.91 ± 12%  perf-profile.calltrace.cycles-pp.pagevec_lookup_tag.mpage_prepare_extent_to_map.ext4_writepages.do_writepages.__writeback_single_inode
      0.00 ± -1%      +Inf%       4.59 ±  8%  perf-profile.calltrace.cycles-pp.pmem_do_bvec.pmem_make_request.generic_make_request.submit_bio.ext4_io_submit
      0.00 ± -1%      +Inf%       6.95 ±  8%  perf-profile.calltrace.cycles-pp.pmem_make_request.generic_make_request.submit_bio.ext4_io_submit.ext4_bio_write_page
      0.00 ± -1%      +Inf%       0.84 ± 23%  perf-profile.calltrace.cycles-pp.pmem_make_request.generic_make_request.submit_bio.ext4_io_submit.ext4_writepages
     12.15 ± 46%     -77.5%       2.74 ± 50%  perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry
     48.36 ±  5%     -67.4%      15.75 ± 10%  perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
     16.45 ±  6%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.radix_tree_lookup_slot.find_get_entry.pagecache_get_page.__find_get_block_slow.unmap_underlying_metadata
      0.00 ± -1%      +Inf%       1.66 ±  8%  perf-profile.calltrace.cycles-pp.radix_tree_lookup_slot.find_get_entry.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin
     48.40 ±  5%     -52.7%      22.88 ±  2%  perf-profile.calltrace.cycles-pp.ret_from_fork
      0.00 ± -1%      +Inf%       5.92 ± 15%  perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_node_memcg.shrink_node.kswapd.kthread
      0.00 ± -1%      +Inf%       6.94 ± 14%  perf-profile.calltrace.cycles-pp.shrink_node.kswapd.kthread.ret_from_fork
      0.00 ± -1%      +Inf%       5.94 ± 15%  perf-profile.calltrace.cycles-pp.shrink_node_memcg.shrink_node.kswapd.kthread.ret_from_fork
      0.00 ± -1%      +Inf%       5.37 ± 15%  perf-profile.calltrace.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node.kswapd
      0.00 ± -1%      +Inf%       1.00 ± 13%  perf-profile.calltrace.cycles-pp.shrink_slab.shrink_node.kswapd.kthread.ret_from_fork
      2.32 ±  8%     -85.1%       0.34 ±100%  perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter.call_cpuidle.cpu_startup_entry
     50.21 ±  5%     -16.3%      42.00 ±  5%  perf-profile.calltrace.cycles-pp.start_secondary
      0.00 ± -1%      +Inf%       6.91 ± 11%  perf-profile.calltrace.cycles-pp.submit_bio.ext4_io_submit.ext4_bio_write_page.mpage_submit_page.mpage_process_page_bufs
      0.00 ± -1%      +Inf%       0.86 ± 24%  perf-profile.calltrace.cycles-pp.submit_bio.ext4_io_submit.ext4_writepages.do_writepages.__writeback_single_inode
      0.00 ± -1%      +Inf%      32.48 ±  4%  perf-profile.calltrace.cycles-pp.sys_io_submit.entry_SYSCALL_64_fastpath
      0.00 ± -1%      +Inf%       0.87 ± 10%  perf-profile.calltrace.cycles-pp.test_clear_page_writeback.end_page_writeback.ext4_finish_bio.ext4_end_bio.bio_endio
      0.00 ± -1%      +Inf%       1.12 ± 17%  perf-profile.calltrace.cycles-pp.try_to_release_page.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node
     42.84 ±  5%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages.do_writepages
     48.30 ±  5%     -67.4%      15.72 ± 10%  perf-profile.calltrace.cycles-pp.wb_workfn.process_one_work.worker_thread.kthread.ret_from_fork
     48.30 ±  5%     -67.4%      15.72 ± 10%  perf-profile.calltrace.cycles-pp.wb_writeback.wb_workfn.process_one_work.worker_thread.kthread
     48.36 ±  5%     -67.4%      15.76 ± 10%  perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
     48.30 ±  5%     -67.5%      15.72 ± 10%  perf-profile.calltrace.cycles-pp.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback.wb_workfn.process_one_work
      3.93 ±  2%     -84.2%       0.62 ±  2%  perf-profile.children.cycles-pp.___might_sleep
      0.02 ±173%   +7942.9%       1.41 ±  7%  perf-profile.children.cycles-pp.__alloc_pages_nodemask
      0.01 ±173%  +17180.0%       2.16 ±  4%  perf-profile.children.cycles-pp.__block_commit_write
      0.05 ± 62%  +32336.8%      15.41 ±  3%  perf-profile.children.cycles-pp.__block_write_begin
      0.05 ± 62%  +32300.0%      15.39 ±  3%  perf-profile.children.cycles-pp.__block_write_begin_int
      0.18 ± 13%   +2764.4%       5.23 ±  9%  perf-profile.children.cycles-pp.__copy_user_nocache
      0.00 ± -1%      +Inf%       2.36 ± 18%  perf-profile.children.cycles-pp.__delete_from_page_cache
      0.06 ± 64%   +4572.7%       2.57 ±  4%  perf-profile.children.cycles-pp.__es_insert_extent
      0.00 ± -1%      +Inf%       0.99 ± 12%  perf-profile.children.cycles-pp.__es_shrink
      0.00 ± -1%      +Inf%       1.16 ±  2%  perf-profile.children.cycles-pp.__es_tree_search
      0.01 ±173%  +14200.0%       1.79 ± 12%  perf-profile.children.cycles-pp.__ext4_journal_start_sb
      0.00 ± -1%      +Inf%       3.10 ±  2%  perf-profile.children.cycles-pp.__find_get_block
     33.44 ±  5%     -95.1%       1.65 ±  1%  perf-profile.children.cycles-pp.__find_get_block_slow
      0.30 ± 10%   +9542.6%      29.41 ±  4%  perf-profile.children.cycles-pp.__generic_file_write_iter
      0.00 ± -1%      +Inf%       3.31 ±  2%  perf-profile.children.cycles-pp.__getblk_gfp
      5.83 ±  3%     -85.3%       0.86 ±  5%  perf-profile.children.cycles-pp.__might_sleep
      0.01 ±173%  +10133.3%       1.53 ±  8%  perf-profile.children.cycles-pp.__page_cache_alloc
     15.24 ±  6%     -70.8%       4.45 ±  7%  perf-profile.children.cycles-pp.__radix_tree_lookup
      0.00 ± -1%      +Inf%       3.41 ±  2%  perf-profile.children.cycles-pp.__read_extent_tree_block
      0.00 ± -1%      +Inf%       2.99 ± 16%  perf-profile.children.cycles-pp.__remove_mapping
      0.00 ± -1%      +Inf%       1.23 ±  7%  perf-profile.children.cycles-pp.__set_page_dirty
      0.00 ± -1%      +Inf%       1.04 ± 13%  perf-profile.children.cycles-pp.__test_set_page_writeback
     48.30 ±  5%     -67.4%      15.72 ± 10%  perf-profile.children.cycles-pp.__writeback_inodes_wb
     48.30 ±  5%     -67.5%      15.72 ± 10%  perf-profile.children.cycles-pp.__writeback_single_inode
      0.09 ± 17%   +1352.9%       1.23 ±  8%  perf-profile.children.cycles-pp._raw_spin_lock_irqsave
      0.00 ± -1%      +Inf%       1.87 ±  8%  perf-profile.children.cycles-pp.add_to_page_cache_lru
      0.32 ± 12%   +9536.4%      31.08 ±  4%  perf-profile.children.cycles-pp.aio_run_iocb
      0.02 ±173%   +8442.9%       1.50 ±  7%  perf-profile.children.cycles-pp.alloc_pages_current
      2.58 ±  7%     -65.8%       0.88 ± 28%  perf-profile.children.cycles-pp.apic_timer_interrupt
      0.07 ± 66%   +3833.3%       2.95 ±  9%  perf-profile.children.cycles-pp.bio_endio
      0.03 ±100%   +7881.8%       2.20 ±  4%  perf-profile.children.cycles-pp.block_write_end
     50.15 ±  5%     -15.4%      42.45 ±  4%  perf-profile.children.cycles-pp.call_cpuidle
      0.05 ±  9%   +4868.2%       2.73 ±  4%  perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
     50.70 ±  5%     -15.5%      42.85 ±  4%  perf-profile.children.cycles-pp.cpu_startup_entry
     50.11 ±  5%     -15.3%      42.45 ±  4%  perf-profile.children.cycles-pp.cpuidle_enter
     46.86 ±  6%     -11.1%      41.67 ±  5%  perf-profile.children.cycles-pp.cpuidle_enter_state
      0.34 ± 14%   +9334.3%      32.31 ±  4%  perf-profile.children.cycles-pp.do_io_submit
     48.30 ±  5%     -67.5%      15.72 ± 10%  perf-profile.children.cycles-pp.do_writepages
      0.01 ±173%  +10480.0%       1.32 ±  9%  perf-profile.children.cycles-pp.end_page_writeback
      0.64 ± 10%   +5173.9%      33.88 ±  4%  perf-profile.children.cycles-pp.entry_SYSCALL_64_fastpath
      0.00 ± -1%      +Inf%       0.95 ± 13%  perf-profile.children.cycles-pp.es_do_reclaim_extents
      0.00 ± -1%      +Inf%       0.98 ± 12%  perf-profile.children.cycles-pp.es_reclaim_extents
      0.05 ± 61%  +19285.0%       9.69 ± 10%  perf-profile.children.cycles-pp.ext4_bio_write_page
      0.01 ±173%  +1.1e+05%      14.36 ±  3%  perf-profile.children.cycles-pp.ext4_da_get_block_prep
      0.17 ± 10%  +13621.2%      22.64 ±  4%  perf-profile.children.cycles-pp.ext4_da_write_begin
      0.04 ± 59%   +7238.9%       3.30 ±  5%  perf-profile.children.cycles-pp.ext4_da_write_end
      0.04 ±100%   +5253.3%       2.01 ±  6%  perf-profile.children.cycles-pp.ext4_end_bio
      0.09 ± 26%   +4362.2%       4.13 ±  3%  perf-profile.children.cycles-pp.ext4_es_insert_extent
      0.04 ± 60%  +10347.1%       4.44 ±  5%  perf-profile.children.cycles-pp.ext4_es_lookup_extent
      0.00 ± -1%      +Inf%       0.99 ± 12%  perf-profile.children.cycles-pp.ext4_es_scan
     46.91 ±  5%     -85.3%       6.88 ±  6%  perf-profile.children.cycles-pp.ext4_ext_map_blocks
      0.30 ± 11%   +9709.1%      29.67 ±  4%  perf-profile.children.cycles-pp.ext4_file_write_iter
      0.04 ±102%  +15914.3%       5.60 ±  4%  perf-profile.children.cycles-pp.ext4_find_extent
      0.03 ±100%   +5800.0%       1.62 ±  9%  perf-profile.children.cycles-pp.ext4_finish_bio
      0.23 ± 16%   +3540.2%       8.37 ±  9%  perf-profile.children.cycles-pp.ext4_io_submit
     47.67 ±  5%     -96.2%       1.82 ± 23%  perf-profile.children.cycles-pp.ext4_map_blocks
      0.00 ± -1%      +Inf%       0.84 ± 31%  perf-profile.children.cycles-pp.ext4_put_io_end_defer
      0.00 ± -1%      +Inf%       1.10 ± 16%  perf-profile.children.cycles-pp.ext4_releasepage
     48.29 ±  5%     -67.6%      15.63 ± 10%  perf-profile.children.cycles-pp.ext4_writepages
     21.26 ±  6%     -85.9%       2.99 ±  6%  perf-profile.children.cycles-pp.find_get_entry
      0.10 ±  9%    +852.6%       0.91 ± 12%  perf-profile.children.cycles-pp.find_get_pages_tag
      0.30 ± 17%   +2817.5%       8.75 ±  9%  perf-profile.children.cycles-pp.generic_make_request
      0.29 ± 10%   +9924.1%      29.07 ±  4%  perf-profile.children.cycles-pp.generic_perform_write
      0.04 ± 58%   +6118.7%       2.49 ±  5%  perf-profile.children.cycles-pp.generic_write_end
      0.01 ±173%   +9420.0%       1.19 ±  9%  perf-profile.children.cycles-pp.get_page_from_freelist
      0.10 ±  5%   +5450.0%       5.27 ±  8%  perf-profile.children.cycles-pp.grab_cache_page_write_begin
      1.01 ± 10%     -56.3%       0.44 ± 26%  perf-profile.children.cycles-pp.hrtimer_interrupt
      1.55 ±  9%     -69.5%       0.47 ± 26%  perf-profile.children.cycles-pp.irq_exit
      0.00 ± -1%      +Inf%       1.49 ± 11%  perf-profile.children.cycles-pp.jbd2__journal_start
      0.00 ± -1%      +Inf%       1.05 ± 17%  perf-profile.children.cycles-pp.jbd2_journal_try_to_free_buffers
      0.01 ±173%   +8660.0%       1.09 ±  6%  perf-profile.children.cycles-pp.kmem_cache_alloc
      0.00 ± -1%      +Inf%       6.95 ± 14%  perf-profile.children.cycles-pp.kswapd
     48.40 ±  5%     -52.7%      22.88 ±  2%  perf-profile.children.cycles-pp.kthread
      1.04 ± 10%     -56.4%       0.45 ± 27%  perf-profile.children.cycles-pp.local_apic_timer_interrupt
      0.00 ± -1%      +Inf%       1.57 ±  5%  perf-profile.children.cycles-pp.mark_buffer_dirty
      0.21 ±  9%    +524.4%       1.34 ± 25%  perf-profile.children.cycles-pp.mpage_map_and_submit_buffers
      0.12 ± 15%   +9124.5%      11.30 ± 12%  perf-profile.children.cycles-pp.mpage_prepare_extent_to_map
      0.00 ± -1%      +Inf%      10.03 ± 11%  perf-profile.children.cycles-pp.mpage_process_page_bufs
      0.07 ± 58%  +15100.0%      10.26 ± 10%  perf-profile.children.cycles-pp.mpage_submit_page
     26.82 ±  6%     -75.6%       6.53 ±  6%  perf-profile.children.cycles-pp.pagecache_get_page
      0.10 ±  9%    +860.5%       0.91 ± 13%  perf-profile.children.cycles-pp.pagevec_lookup_tag
      0.18 ± 13%   +2830.1%       5.35 ±  9%  perf-profile.children.cycles-pp.pmem_do_bvec
      0.28 ± 17%   +2935.1%       8.42 ±  9%  perf-profile.children.cycles-pp.pmem_make_request
     12.15 ± 46%     -77.3%       2.76 ± 50%  perf-profile.children.cycles-pp.poll_idle
     48.36 ±  5%     -67.4%      15.75 ± 10%  perf-profile.children.cycles-pp.process_one_work
     17.14 ±  6%     -86.0%       2.39 ±  6%  perf-profile.children.cycles-pp.radix_tree_lookup_slot
     48.40 ±  5%     -52.7%      22.88 ±  2%  perf-profile.children.cycles-pp.ret_from_fork
      0.00 ± -1%      +Inf%       5.92 ± 15%  perf-profile.children.cycles-pp.shrink_inactive_list
      0.00 ± -1%      +Inf%       6.94 ± 14%  perf-profile.children.cycles-pp.shrink_node
      0.00 ± -1%      +Inf%       5.94 ± 15%  perf-profile.children.cycles-pp.shrink_node_memcg
      0.00 ± -1%      +Inf%       5.37 ± 15%  perf-profile.children.cycles-pp.shrink_page_list
      0.00 ± -1%      +Inf%       1.00 ± 13%  perf-profile.children.cycles-pp.shrink_slab
      2.51 ±  8%     -65.7%       0.86 ± 28%  perf-profile.children.cycles-pp.smp_apic_timer_interrupt
     50.21 ±  5%     -16.3%      42.00 ±  5%  perf-profile.children.cycles-pp.start_secondary
      0.30 ± 17%   +2828.3%       8.79 ±  9%  perf-profile.children.cycles-pp.submit_bio
      0.35 ± 15%   +9326.8%      32.52 ±  4%  perf-profile.children.cycles-pp.sys_io_submit
      0.00 ± -1%      +Inf%       1.09 ±  9%  perf-profile.children.cycles-pp.test_clear_page_writeback
      0.00 ± -1%      +Inf%       1.13 ± 17%  perf-profile.children.cycles-pp.try_to_release_page
     43.48 ±  5%     -99.9%       0.03 ±100%  perf-profile.children.cycles-pp.unmap_underlying_metadata
     48.30 ±  5%     -67.4%      15.72 ± 10%  perf-profile.children.cycles-pp.wb_workfn
     48.30 ±  5%     -67.4%      15.72 ± 10%  perf-profile.children.cycles-pp.wb_writeback
     48.36 ±  5%     -67.4%      15.76 ± 10%  perf-profile.children.cycles-pp.worker_thread
     48.30 ±  5%     -67.5%      15.72 ± 10%  perf-profile.children.cycles-pp.writeback_sb_inodes
      3.93 ±  2%     -84.2%       0.62 ±  2%  perf-profile.self.cycles-pp.___might_sleep
      0.18 ± 13%   +2764.4%       5.23 ±  9%  perf-profile.self.cycles-pp.__copy_user_nocache
      0.01 ±173%   +9700.0%       1.23 ±  2%  perf-profile.self.cycles-pp.__es_insert_extent
      0.00 ± -1%      +Inf%       1.07 ±  2%  perf-profile.self.cycles-pp.__es_tree_search
      6.10 ±  4%     -95.7%       0.27 ±  5%  perf-profile.self.cycles-pp.__find_get_block_slow
      3.10 ±  6%     -87.1%       0.40 ±  8%  perf-profile.self.cycles-pp.__might_sleep
     15.24 ±  6%     -70.8%       4.45 ±  7%  perf-profile.self.cycles-pp.__radix_tree_lookup
      0.05 ±  9%   +4868.2%       2.73 ±  4%  perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
      0.04 ± 60%   +9776.5%       4.20 ±  5%  perf-profile.self.cycles-pp.ext4_es_lookup_extent
      1.22 ±  7%     -87.1%       0.16 ± 13%  perf-profile.self.cycles-pp.ext4_ext_map_blocks
      0.00 ± -1%      +Inf%       1.86 ±  9%  perf-profile.self.cycles-pp.ext4_find_extent
      0.00 ± -1%      +Inf%       0.84 ± 31%  perf-profile.self.cycles-pp.ext4_put_io_end_defer
      4.18 ±  5%     -85.6%       0.60 ±  4%  perf-profile.self.cycles-pp.find_get_entry
      6.03 ±  6%     -98.1%       0.11 ± 17%  perf-profile.self.cycles-pp.pagecache_get_page
     12.15 ± 46%     -77.3%       2.76 ± 50%  perf-profile.self.cycles-pp.poll_idle
      2.47 ±  4%    -100.0%       0.00 ± -1%  perf-profile.self.cycles-pp.radix_tree_lookup_slot
      4.26 ±  8%    -100.0%       0.00 ± -1%  perf-profile.self.cycles-pp.unmap_underlying_metadata
                                 fio.write_bw_MBps

  3500 ++-------------------------------------------------------------------+
       |       O                                                            |
  3000 ++   O                                                               |
       O  O           O  O    O    O  O O       O         O         O  O    O
  2500 ++        O  O      O     O         O O    O  O  O    O O  O      O  |
       |                                                                    |
  2000 ++                                                                   |
       |                                                                    |
  1500 ++                                                                   |
       |                                                                    |
  1000 ++                                                                   |
       |                                                                    |
   500 ++                                                                   |
       |                                                                    |
     0 *+-*-*--*-*--*-*--*-*--*--*-*--*-*--*-*--*-*--*--*-*--*-*--*-*--*-*--*


                                   fio.write_iops

  900000 ++-----------------------------------------------------------------+
         |                                                                  |
  800000 ++   O O                                                           |
  700000 O+O            O O    O       O O       O         O         O O    O
         |         O O       O    O O       O O    O  O O    O  O O       O |
  600000 ++                                                                 |
  500000 ++                                                                 |
         |                                                                  |
  400000 ++                                                                 |
  300000 ++                                                                 |
         |                                                                  |
  200000 ++                                                                 |
  100000 ++                                                                 |
         |                                                                  |
       0 *+*--*-*--*-*--*-*--*-*--*-*--*-*--*-*--*-*--*-*--*-*--*-*--*-*--*-*


                               fio.write_clat_mean_us

  60000 ++------------------------------------------------------------------+
        |                .*.       .*.                                      |
  50000 *+.*.*..*.*..*.*.   *..*.*.   *..*.*..*.*..*.*..*.*..*.*..*.*..*.*..*
        |                                                                   |
        |                                                                   |
  40000 ++                                                                  |
        |                                                                   |
  30000 ++                                                                  |
        |                                                                   |
  20000 ++                                                                  |
        |                                                                   |
        |                                                                   |
  10000 ++                                                                  |
        |                                                                   |
      0 O+-O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O


                                fio.write_clat_stddev

  120000 ++-----------------------------------------------------------------+
         |          .*.. .*..*.  .*.*..                                  .*.|
  100000 *+*..*.*..*    *      *.      *.*..*.*..*.*..*.*..*.*..*.*..*.*.   *
         |                                                                  |
         |                                                                  |
   80000 ++                                                                 |
         |                                                                  |
   60000 ++                                                                 |
         |                                                                  |
   40000 ++                                                                 |
         |                                                                  |
         |                                                                  |
   20000 ++                                                                 |
         O O  O O  O O  O O  O O  O O  O O  O O  O O  O O  O O  O O  O O  O O
       0 ++-----------------------------------------------------------------+


                                fio.write_clat_90__us

  200000 ++-----------------------------------------------------------------+
  180000 ++          *..                                                    |
         |   .*.*.. +    .*..*.*..*.*..    .*.*..      .*..*.  .*.    .*..*.*
  160000 *+*.      *    *              *.*.      *.*..*      *.   *..*      |
  140000 ++                                                                 |
         |                                                                  |
  120000 ++                                                                 |
  100000 ++                                                                 |
   80000 ++                                                                 |
         |                                                                  |
   60000 ++                                                                 |
   40000 ++                                                                 |
         |                                                                  |
   20000 ++                                                                 |
       0 O+O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O


                                fio.write_clat_95__us

  250000 ++-----------------------------------------------------------------+
         |                                                                  |
         *.*..*.*..*.*..*.*..*.*..*.*..*.*..*.*..*.*..*.*..*.*..*.*..*.*..*.*
  200000 ++                                                                 |
         |                                                                  |
         |                                                                  |
  150000 ++                                                                 |
         |                                                                  |
  100000 ++                                                                 |
         |                                                                  |
         |                                                                  |
   50000 ++                                                                 |
         |                                                                  |
         |                                                                  |
       0 O+O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O


                                fio.write_clat_99__us

  700000 ++-----------------------------------------------------------------+
         |    *.*    *    *..*    *.*       *.*         *..*    *.*       *.*
  600000 ++  :   :   ::   :  :   :   :     :   :        :  :   :   :     :  |
         |   :   :  : :  :    :  :   :     :   :       :    :  :   :     :  |
  500000 ++ :     : :  : :    : :     :   :     :      :    : :     :   :   |
         |  :     ::   ::      ::     :   :     :     :      ::     :   :   |
  400000 *+*       *    *      *       *.*       *.*..*      *       *.*    |
         |                                                                  |
  300000 ++                                                                 |
         |                                                                  |
  200000 ++                                                                 |
         |                                                                  |
  100000 ++                                                                 |
         O O  O O  O O  O O  O O  O O  O O  O O  O O  O O  O O  O O  O O  O O
       0 ++-----------------------------------------------------------------+


                              fio.write_slat_mean_us

  1800 ++----------------*---------*----------------------------------------+
       *..*.*..*.*..*.*.   *..*..*    *.*..*.*..*.*..*..*.*..*.*..*.*..*.*..*
  1600 ++                                                                   |
  1400 ++                                                                   |
       |                                                                    |
  1200 ++                                                                   |
  1000 ++                                                                   |
       |                                                                    |
   800 ++                                                                   |
   600 ++                                                                   |
       |                                                                    |
   400 ++                                                                   |
   200 ++                                                                   |
       |                                                                    |
     0 O+-O-O--O-O--O-O--O-O--O--O-O--O-O--O-O--O-O--O--O-O--O-O--O-O--O-O--O


                                fio.write_slat_stddev

  18000 ++----------------*-*----*--*---------------------------------------+
        *..*.*..*.*..*.*.      *      *..*.*..*.*..*.*..*.*..*.*..*.*..*.*..*
  16000 ++                                                                  |
  14000 ++                                                                  |
        |                                                                   |
  12000 ++                                                                  |
  10000 ++                                                                  |
        |                                                                   |
   8000 ++                                                                  |
   6000 ++                                                                  |
        |                                                                   |
   4000 ++                                                                  |
   2000 ++                                                                  |
        O  O O  O O  O O  O O  O O  O O  O O  O O  O O  O O  O O  O O  O O  O
      0 ++------------------------------------------------------------------+


        [*] bisect-good sample
        [O] bisect-bad  sample

***************************************************************************************************
lkp-hsw-ep6: 56 threads Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz with 256G memory
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/time_based:
  4k/gcc-6/performance/2pmem/ext4/sync/x86_64-rhel-7.2/50%/debian-x86_64-2016-08-31.cgz/200s/randwrite/lkp-hsw-ep6/200G/fio-basic/tb

commit:
  6f2b562c3a ("direct-io: Use clean_bdev_aliases() instead of handmade iteration")
  adad5aa544 ("ext4: Use clean_bdev_aliases() instead of iteration")

6f2b562c3a89f4a6 adad5aa544e281d84f837b2786
---------------- --------------------------
       fail:runs  %reproduction    fail:runs
           |             |             |    
         %stddev     %change         %stddev
             \          |                \  
     64.51 ±  0%   +3928.3%       2598 ±  1%  fio.write_bw_MBps
     16514 ±  0%   +3928.3%     665261 ±  1%  fio.write_iops
      0.01 ± 34%    +860.0%       0.12 ± 15%  fio.latency_100us%
      0.13 ±  0%     -92.3%       0.01 ±  0%  fio.latency_10ms%
     73.95 ±  3%     -44.9%      40.77 ±  9%  fio.latency_10us%
     22.26 ± 11%    +152.0%      56.10 ±  6%  fio.latency_20us%
      0.99 ±  1%     -99.0%       0.01 ±  0%  fio.latency_250ms%
      1.14 ± 38%     -79.4%       0.23 ± 21%  fio.latency_4us%
      1.50 ± 14%     +78.5%       2.68 ± 15%  fio.latency_50us%
  26441098 ±  0%   +3924.1%  1.064e+09 ±  1%  fio.time.file_system_outputs
    697.50 ± 10%    +751.8%       5941 ± 13%  fio.time.involuntary_context_switches
     35000 ±  2%     +85.3%      64861 ±  9%  fio.time.minor_page_faults
     19.75 ±  2%   +3984.8%     806.75 ±  2%  fio.time.percent_of_cpu_this_job_got
     27.40 ±  2%   +5046.0%       1410 ±  2%  fio.time.system_time
     13.58 ±  4%   +1448.1%     210.31 ±  2%  fio.time.user_time
     58660 ±  1%    +880.4%     575084 ±  1%  fio.time.voluntary_context_switches
     12.00 ±  5%     +25.0%      15.00 ±  4%  fio.write_clat_90%_us
     35088 ±126%     -99.9%      31.50 ±  4%  fio.write_clat_99%_us
      1692 ±  0%     -97.6%      40.74 ±  1%  fio.write_clat_mean_us
     17029 ±  1%     -93.8%       1058 ±  0%  fio.write_clat_stddev
     53709 ±  8%    +262.8%     194831 ±  5%  softirqs.RCU
    131442 ± 21%    +103.9%     267986 ±  1%  softirqs.SCHED
    564652 ±  0%    +160.4%    1470111 ±  1%  softirqs.TIMER
      7.64 ±  1%    +202.8%      23.14 ±  1%  turbostat.%Busy
    160.00 ±  1%    +119.7%     351.50 ±  1%  turbostat.Avg_MHz
      2093 ±  0%     -27.5%       1518 ±  0%  turbostat.Bzy_MHz
     33.02 ±  8%     -21.7%      25.84 ± 15%  turbostat.CPU%c3
     27.55 ±  3%     -36.9%      17.38 ± 24%  turbostat.CPU%c6
    104.45 ±  1%     +10.0%     114.90 ±  1%  turbostat.PkgWatt
     81.34 ±  0%     +44.8%     117.79 ±  0%  turbostat.RAMWatt
    898622 ± 71%   +5898.0%   53899245 ± 30%  numa-numastat.node0.local_node
    266526 ±170%   +6795.0%   18377074 ± 77%  numa-numastat.node0.numa_foreign
    898629 ± 71%   +5897.9%   53899261 ± 30%  numa-numastat.node0.numa_hit
    952875 ± 57%    +971.7%   10212364 ± 77%  numa-numastat.node0.numa_miss
   1759729 ± 31%   +2688.3%   49067006 ± 48%  numa-numastat.node1.local_node
    956501 ± 57%    +966.7%   10203280 ± 77%  numa-numastat.node1.numa_foreign
   1759741 ± 31%   +2688.3%   49067011 ± 48%  numa-numastat.node1.numa_hit
    270153 ±168%   +6699.2%   18368109 ± 77%  numa-numastat.node1.numa_miss
     67997 ±  1%   +4935.1%    3423742 ±  1%  vmstat.io.bo
     19171 ±  0%    +902.5%     192190 ±  0%  vmstat.memory.buff
  12561916 ±  0%    +231.4%   41626714 ±  0%  vmstat.memory.cache
  32814140 ±  0%     -89.1%    3561112 ±  1%  vmstat.memory.free
     27.00 ±  0%     -30.6%      18.75 ±  2%  vmstat.procs.b
      2.00 ±  0%    +425.0%      10.50 ±  4%  vmstat.procs.r
      2267 ±  7%    +425.8%      11922 ±  2%  vmstat.system.cs
     57499 ±  0%      +2.0%      58675 ±  0%  vmstat.system.in
   3757119 ± 10%    +258.1%   13452676 ± 12%  cpuidle.C1-HSW.time
     70143 ± 23%    +684.6%     550376 ±  8%  cpuidle.C1-HSW.usage
 1.719e+08 ± 45%     -73.5%   45531804 ± 32%  cpuidle.C1E-HSW.time
    214536 ± 37%     -51.0%     105146 ± 14%  cpuidle.C1E-HSW.usage
 4.173e+09 ±  6%     -24.9%  3.133e+09 ±  6%  cpuidle.C3-HSW.time
   4335690 ±  6%     -24.1%    3290671 ±  6%  cpuidle.C3-HSW.usage
  6.14e+09 ±  3%      -9.9%  5.529e+09 ±  2%  cpuidle.C6-HSW.time
   6354558 ±  3%      -9.8%    5734069 ±  2%  cpuidle.C6-HSW.usage
      2300 ± 14%    +510.3%      14041 ±  5%  cpuidle.POLL.usage
    164216 ±  1%    +128.1%     374592 ±  1%  meminfo.Active
     59099 ±  2%    +350.9%     266480 ±  1%  meminfo.Active(file)
     19136 ±  0%    +904.0%     192130 ±  0%  meminfo.Buffers
  11734813 ±  0%    +240.2%   39922439 ±  0%  meminfo.Cached
    202840 ±  0%     -76.6%      47427 ±  7%  meminfo.CmaFree
   8241883 ±  0%     -14.8%    7022847 ±  0%  meminfo.Dirty
  11690671 ±  0%    +235.9%   39269224 ±  0%  meminfo.Inactive
  11568749 ±  0%    +238.4%   39147090 ±  0%  meminfo.Inactive(file)
  32820647 ±  0%     -89.1%    3569553 ±  1%  meminfo.MemFree
    820377 ±  0%    +106.7%    1695619 ±  0%  meminfo.SReclaimable
    919835 ±  0%     +95.1%    1794614 ±  0%  meminfo.Slab
    989.25 ± 57%  +57562.5%     570426 ±  0%  meminfo.Unevictable
     71280 ± 25%    +220.1%     228145 ± 33%  numa-meminfo.node0.Active
     29498 ± 24%    +440.1%     159331 ± 41%  numa-meminfo.node0.Active(file)
   5690258 ±  5%    +248.7%   19839495 ±  0%  numa-meminfo.node0.FilePages
   5658227 ±  5%    +242.7%   19392224 ±  0%  numa-meminfo.node0.Inactive
   5571535 ±  5%    +246.5%   19303184 ±  0%  numa-meminfo.node0.Inactive(file)
  16701744 ±  3%     -89.1%    1827775 ±  2%  numa-meminfo.node0.MemFree
   7905066 ±  7%    +188.2%   22779035 ±  0%  numa-meminfo.node0.MemUsed
    235449 ±123%    +278.8%     891948 ± 12%  numa-meminfo.node0.SReclaimable
    288330 ±100%    +228.3%     946566 ± 12%  numa-meminfo.node0.Slab
    467.50 ± 60%  +60926.3%     285297 ±  0%  numa-meminfo.node0.Unevictable
     29610 ± 27%    +260.3%     106675 ± 59%  numa-meminfo.node1.Active(file)
   6066070 ±  5%    +234.0%   20260420 ±  0%  numa-meminfo.node1.FilePages
   6034806 ±  5%    +229.1%   19862918 ±  0%  numa-meminfo.node1.Inactive
   5999586 ±  5%    +230.5%   19829825 ±  0%  numa-meminfo.node1.Inactive(file)
  16116396 ±  4%     -89.1%    1756070 ±  0%  numa-meminfo.node1.MemFree
   8634214 ±  7%    +166.3%   22994540 ±  0%  numa-meminfo.node1.MemUsed
    521.25 ± 60%  +54624.3%     285250 ±  0%  numa-meminfo.node1.Unevictable
   2728768 ±  0%    +259.7%    9814399 ±  0%  slabinfo.buffer_head.active_objs
     69967 ±  0%    +260.0%     251912 ±  0%  slabinfo.buffer_head.active_slabs
   2728768 ±  0%    +260.0%    9824608 ±  0%  slabinfo.buffer_head.num_objs
     69967 ±  0%    +260.0%     251912 ±  0%  slabinfo.buffer_head.num_slabs
    115.25 ± 24%    +785.9%       1021 ±  9%  slabinfo.dquot.active_objs
    115.25 ± 24%    +785.9%       1021 ±  9%  slabinfo.dquot.num_objs
    925018 ±  1%    +162.9%    2432034 ±  0%  slabinfo.ext4_extent_status.active_objs
      9068 ±  1%    +357.6%      41494 ±  1%  slabinfo.ext4_extent_status.active_slabs
    925018 ±  1%    +357.6%    4232438 ±  1%  slabinfo.ext4_extent_status.num_objs
      9068 ±  1%    +357.6%      41494 ±  1%  slabinfo.ext4_extent_status.num_slabs
    149.50 ± 27%    +239.1%     507.00 ± 65%  slabinfo.ext4_io_end.active_objs
    149.50 ± 27%    +239.1%     507.00 ± 65%  slabinfo.ext4_io_end.num_objs
      3009 ± 10%     +53.9%       4631 ±  0%  slabinfo.jbd2_journal_handle.active_objs
      3009 ± 10%     +53.9%       4631 ±  0%  slabinfo.jbd2_journal_handle.num_objs
      1808 ±  9%   +1387.2%      26889 ±  1%  slabinfo.jbd2_journal_head.active_objs
     67.50 ±  4%   +1158.9%     849.75 ±  1%  slabinfo.jbd2_journal_head.active_slabs
      2310 ±  4%   +1151.5%      28909 ±  1%  slabinfo.jbd2_journal_head.num_objs
     67.50 ±  4%   +1158.9%     849.75 ±  1%  slabinfo.jbd2_journal_head.num_slabs
    145655 ±  2%     -77.5%      32809 ±  0%  latency_stats.avg.balance_dirty_pages.balance_dirty_pages_ratelimited.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
      0.00 ± -1%      +Inf%      40251 ±  5%  latency_stats.avg.jbd2_log_wait_commit.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
      0.00 ± -1%      +Inf%      40772 ±  4%  latency_stats.avg.jbd2_log_wait_commit.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_dirty_inode.__mark_inode_dirty.generic_update_time.file_update_time.__generic_file_write_iter
    245.00 ± 62%  +15014.2%      37029 ± 12%  latency_stats.avg.wait_transaction_locked.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write.vfs_write.SyS_write
    381.50 ± 92%  +17588.2%      67480 ±  8%  latency_stats.avg.wait_transaction_locked.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_dirty_inode.__mark_inode_dirty.generic_update_time.file_update_time.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
     37333 ±  2%    +212.4%     116638 ±  1%  latency_stats.hits.balance_dirty_pages.balance_dirty_pages_ratelimited.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
      0.00 ± -1%      +Inf%     428352 ±  1%  latency_stats.hits.call_rwsem_down_read_failed.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
     52674 ± 19%    +713.2%     428352 ±  1%  latency_stats.hits.max
     25.25 ±173%  +28336.6%       7180 ± 58%  latency_stats.hits.wait_on_page_bit.__migration_entry_wait.migration_entry_wait.do_swap_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
      0.00 ± -1%      +Inf%      52028 ±  3%  latency_stats.max.jbd2_log_wait_commit.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
      0.00 ± -1%      +Inf%      42916 ± 11%  latency_stats.max.jbd2_log_wait_commit.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_dirty_inode.__mark_inode_dirty.generic_update_time.file_update_time.__generic_file_write_iter
    415.25 ± 61%  +50138.4%     208614 ±  6%  latency_stats.max.wait_transaction_locked.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write.vfs_write.SyS_write
    406.25 ± 94%  +49880.9%     203047 ±  8%  latency_stats.max.wait_transaction_locked.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_dirty_inode.__mark_inode_dirty.generic_update_time.file_update_time.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
      0.00 ± -1%      +Inf%    1808461 ± 11%  latency_stats.sum.call_rwsem_down_read_failed.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
      0.00 ± -1%      +Inf%     824147 ± 12%  latency_stats.sum.jbd2_log_wait_commit.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
      0.00 ± -1%      +Inf%      82526 ± 39%  latency_stats.sum.jbd2_log_wait_commit.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_dirty_inode.__mark_inode_dirty.generic_update_time.file_update_time.__generic_file_write_iter
    369.00 ±173%  +74142.4%     273954 ± 68%  latency_stats.sum.wait_on_page_bit.__migration_entry_wait.migration_entry_wait.do_swap_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
      1932 ±109%  +6.4e+05%   12442046 ± 18%  latency_stats.sum.wait_transaction_locked.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write.vfs_write.SyS_write
      1112 ± 97%  +1.8e+06%   20573361 ±  7%  latency_stats.sum.wait_transaction_locked.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_dirty_inode.__mark_inode_dirty.generic_update_time.file_update_time.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
 8.134e+11 ±  7%     -29.2%   5.76e+11 ±  1%  perf-stat.branch-instructions
      0.07 ±  1%   +2199.3%       1.54 ±  1%  perf-stat.branch-miss-rate%
 5.443e+08 ±  6%   +1528.4%  8.863e+09 ±  1%  perf-stat.branch-misses
      3.32 ±  6%   +1005.8%      36.72 ±  2%  perf-stat.cache-miss-rate%
 1.301e+08 ± 12%   +7830.6%  1.032e+10 ±  1%  perf-stat.cache-misses
 3.903e+09 ±  6%    +620.2%  2.811e+10 ±  1%  perf-stat.cache-references
    446946 ±  5%    +439.6%    2411928 ±  2%  perf-stat.context-switches
  1.89e+12 ±  5%    +118.5%   4.13e+12 ±  1%  perf-stat.cpu-cycles
      8650 ±  7%    +161.0%      22573 ±  8%  perf-stat.cpu-migrations
      0.05 ± 33%   +2689.6%       1.38 ±  7%  perf-stat.dTLB-load-miss-rate%
 3.894e+08 ±  5%   +3138.7%  1.261e+10 ±  6%  perf-stat.dTLB-load-misses
      0.01 ± 59%   +1236.0%       0.14 ± 14%  perf-stat.dTLB-store-miss-rate%
  71487525 ±  3%   +1048.6%  8.211e+08 ± 10%  perf-stat.dTLB-store-misses
     58.98 ±  1%     -45.2%      32.33 ±  1%  perf-stat.iTLB-load-miss-rate%
  81063987 ±  1%     -11.6%   71683301 ±  2%  perf-stat.iTLB-load-misses
  56389233 ±  2%    +166.2%  1.501e+08 ±  3%  perf-stat.iTLB-loads
 4.453e+12 ±  7%     -28.5%  3.184e+12 ±  1%  perf-stat.instructions
     54933 ±  7%     -19.1%      44447 ±  3%  perf-stat.instructions-per-iTLB-miss
      2.35 ±  1%     -67.2%       0.77 ±  2%  perf-stat.ipc
    417970 ±  0%      +7.8%     450558 ±  1%  perf-stat.minor-faults
  47679867 ± 13%  +10402.2%  5.007e+09 ± 15%  perf-stat.node-load-misses
  44725841 ±  6%   +8141.8%  3.686e+09 ± 19%  perf-stat.node-loads
     42.89 ± 15%     +26.5%      54.27 ±  5%  perf-stat.node-store-miss-rate%
  12288499 ± 27%   +4184.3%  5.265e+08 ±  5%  perf-stat.node-store-misses
  15883566 ±  5%   +2695.6%   4.44e+08 ±  7%  perf-stat.node-stores
    417971 ±  0%      +7.8%     450593 ±  1%  perf-stat.page-faults
      7366 ± 24%    +440.8%      39839 ± 41%  numa-vmstat.node0.nr_active_file
   1317901 ±  5%   +1806.1%   25120768 ± 13%  numa-vmstat.node0.nr_dirtied
   1421455 ±  5%    +248.9%    4959834 ±  0%  numa-vmstat.node0.nr_file_pages
   4176580 ±  3%     -89.1%     456964 ±  2%  numa-vmstat.node0.nr_free_pages
   1391778 ±  5%    +246.7%    4825746 ±  0%  numa-vmstat.node0.nr_inactive_file
     58818 ±123%    +279.1%     222990 ± 12%  numa-vmstat.node0.nr_slab_reclaimable
    116.00 ± 60%  +61385.8%      71323 ±  0%  numa-vmstat.node0.nr_unevictable
    328581 ±  4%   +7288.0%   24275709 ± 13%  numa-vmstat.node0.nr_written
      7366 ± 24%    +440.8%      39840 ± 41%  numa-vmstat.node0.nr_zone_active_file
   1391778 ±  5%    +246.7%    4825787 ±  0%  numa-vmstat.node0.nr_zone_inactive_file
    116.00 ± 60%  +61386.0%      71323 ±  0%  numa-vmstat.node0.nr_zone_unevictable
    320855 ±126%   +2018.4%    6796862 ± 81%  numa-vmstat.node0.numa_foreign
    811358 ± 69%   +2361.3%   19970097 ± 36%  numa-vmstat.node0.numa_hit
    811349 ± 69%   +2361.3%   19970080 ± 36%  numa-vmstat.node0.numa_local
    902242 ± 54%    +496.3%    5379824 ± 70%  numa-vmstat.node0.numa_miss
      7395 ± 27%    +260.7%      26675 ± 59%  numa-vmstat.node1.nr_active_file
   1475724 ±  5%   +1764.2%   27510187 ± 12%  numa-vmstat.node1.nr_dirtied
   1515054 ±  5%    +234.3%    5065379 ±  0%  numa-vmstat.node1.nr_file_pages
     50710 ±  0%     -76.1%      12111 ±  8%  numa-vmstat.node1.nr_free_cma
   4030612 ±  4%     -89.1%     438738 ±  0%  numa-vmstat.node1.nr_free_pages
   1498440 ±  5%    +230.9%    4957711 ±  0%  numa-vmstat.node1.nr_inactive_file
    129.75 ± 60%  +54860.9%      71311 ±  0%  numa-vmstat.node1.nr_unevictable
    403988 ±  4%   +6484.2%   26599507 ± 12%  numa-vmstat.node1.nr_written
      7395 ± 27%    +260.7%      26675 ± 59%  numa-vmstat.node1.nr_zone_active_file
   1498440 ±  5%    +230.9%    4957710 ±  0%  numa-vmstat.node1.nr_zone_inactive_file
    129.75 ± 60%  +54860.9%      71311 ±  0%  numa-vmstat.node1.nr_zone_unevictable
    872187 ± 53%    +506.1%    5286095 ± 71%  numa-vmstat.node1.numa_foreign
   1639879 ± 30%   +1191.0%   21171288 ± 43%  numa-vmstat.node1.numa_hit
   1639864 ± 30%   +1191.0%   21171282 ± 43%  numa-vmstat.node1.numa_local
    290798 ±150%   +2205.1%    6703187 ± 82%  numa-vmstat.node1.numa_miss
     14775 ±  2%    +351.0%      66631 ±  1%  proc-vmstat.nr_active_file
   3314070 ±  0%   +3927.8%  1.335e+08 ±  1%  proc-vmstat.nr_dirtied
   2060410 ±  0%     -14.8%    1755444 ±  0%  proc-vmstat.nr_dirty
   2938721 ±  0%    +241.3%   10029056 ±  0%  proc-vmstat.nr_file_pages
     50710 ±  0%     -76.5%      11910 ±  8%  proc-vmstat.nr_free_cma
   8204888 ±  0%     -89.1%     891960 ±  1%  proc-vmstat.nr_free_pages
   2892431 ±  0%    +238.4%    9787283 ±  0%  proc-vmstat.nr_inactive_file
    205104 ±  0%    +106.7%     423919 ±  0%  proc-vmstat.nr_slab_reclaimable
    247.25 ± 57%  +57596.0%     142653 ±  0%  proc-vmstat.nr_unevictable
   1398737 ±  1%   +9324.0%  1.318e+08 ±  1%  proc-vmstat.nr_written
     14775 ±  2%    +351.0%      66633 ±  1%  proc-vmstat.nr_zone_active_file
   2892431 ±  0%    +238.4%    9787320 ±  0%  proc-vmstat.nr_zone_inactive_file
    247.25 ± 57%  +57596.1%     142653 ±  0%  proc-vmstat.nr_zone_unevictable
   2060409 ±  0%     -14.8%    1755458 ±  0%  proc-vmstat.nr_zone_write_pending
   1223028 ±  9%   +2245.9%   28690632 ± 30%  proc-vmstat.numa_foreign
    282.75 ±109%   +8696.6%      24872 ± 11%  proc-vmstat.numa_hint_faults
     82.00 ± 56%  +23385.1%      19257 ± 13%  proc-vmstat.numa_hint_faults_local
   2660314 ±  3%   +3794.0%  1.036e+08 ± 10%  proc-vmstat.numa_hit
   2660293 ±  3%   +3794.0%  1.036e+08 ± 10%  proc-vmstat.numa_local
   1223028 ±  9%   +2245.9%   28690610 ± 30%  proc-vmstat.numa_miss
    195.75 ±143%    +693.5%       1553 ± 22%  proc-vmstat.numa_pages_migrated
      1185 ±115%   +2435.2%      30048 ± 10%  proc-vmstat.numa_pte_updates
      2845 ±  6%     +57.4%       4479 ± 13%  proc-vmstat.pgactivate
      0.00 ±  0%      +Inf%    5194751 ± 13%  proc-vmstat.pgalloc_dma32
   4015781 ±  0%   +3068.0%  1.272e+08 ±  1%  proc-vmstat.pgalloc_normal
    443537 ±  0%  +27265.1%  1.214e+08 ±  1%  proc-vmstat.pgfree
      0.75 ±173%  +16200.0%     122.25 ± 42%  proc-vmstat.pgmigrate_fail
    195.75 ±143%    +693.5%       1553 ± 22%  proc-vmstat.pgmigrate_success
  13772223 ±  1%   +4931.5%   6.93e+08 ±  1%  proc-vmstat.pgpgout
    364.25 ± 57%  +46700.3%     170470 ±  0%  proc-vmstat.unevictable_pgs_culled
      3771 ±  0%    +283.2%      14453 ±  1%  sched_debug.cfs_rq:/.exec_clock.avg
    123.31 ± 22%    +863.5%       1188 ± 30%  sched_debug.cfs_rq:/.exec_clock.min
     15977 ±  4%     -28.2%      11469 ± 17%  sched_debug.cfs_rq:/.exec_clock.stddev
     41685 ±  1%    +345.4%     185655 ±  7%  sched_debug.cfs_rq:/.load.avg
    161304 ±  1%     +95.9%     315971 ±  4%  sched_debug.cfs_rq:/.load.stddev
     32.87 ±  4%    +435.3%     175.97 ±  6%  sched_debug.cfs_rq:/.load_avg.avg
    157.72 ±  7%     +28.4%     202.48 ±  3%  sched_debug.cfs_rq:/.load_avg.stddev
     18584 ± 14%     +47.5%      27418 ±  5%  sched_debug.cfs_rq:/.min_vruntime.avg
     16749 ±  3%     -28.6%      11963 ± 16%  sched_debug.cfs_rq:/.min_vruntime.stddev
      0.10 ± 17%    +136.0%       0.23 ±  6%  sched_debug.cfs_rq:/.nr_running.avg
      0.29 ±  6%     +43.8%       0.41 ±  3%  sched_debug.cfs_rq:/.nr_running.stddev
     29.02 ±  1%    +220.3%      92.95 ±  9%  sched_debug.cfs_rq:/.runnable_load_avg.avg
    141.32 ±  0%     +35.8%     191.91 ±  4%  sched_debug.cfs_rq:/.runnable_load_avg.stddev
     83254 ±  2%     -38.9%      50853 ± 35%  sched_debug.cfs_rq:/.spread0.max
    -12291 ±-14%     +72.7%     -21225 ±-23%  sched_debug.cfs_rq:/.spread0.min
     16749 ±  3%     -28.6%      11966 ± 16%  sched_debug.cfs_rq:/.spread0.stddev
    185.76 ±  4%     +85.4%     344.36 ±  2%  sched_debug.cfs_rq:/.util_avg.avg
    158.08 ±  1%     +37.9%     217.92 ±  3%  sched_debug.cfs_rq:/.util_avg.stddev
    182455 ±  4%     +24.2%     226649 ± 10%  sched_debug.cpu.avg_idle.stddev
      5.84 ±  8%    +163.5%      15.38 ±  7%  sched_debug.cpu.clock.stddev
      5.84 ±  8%    +163.5%      15.38 ±  7%  sched_debug.cpu.clock_task.stddev
     28.76 ±  1%    +216.9%      91.13 ± 11%  sched_debug.cpu.cpu_load[0].avg
    141.31 ±  0%     +35.0%     190.72 ±  5%  sched_debug.cpu.cpu_load[0].stddev
     29.02 ±  1%    +352.8%     131.42 ± 15%  sched_debug.cpu.cpu_load[1].avg
    141.76 ±  0%     +41.5%     200.56 ±  5%  sched_debug.cpu.cpu_load[1].stddev
     28.81 ±  1%    +343.7%     127.84 ± 15%  sched_debug.cpu.cpu_load[2].avg
    141.36 ±  0%     +39.6%     197.31 ±  5%  sched_debug.cpu.cpu_load[2].stddev
     28.54 ±  1%    +324.6%     121.20 ± 15%  sched_debug.cpu.cpu_load[3].avg
    141.27 ±  0%     +35.5%     191.48 ±  5%  sched_debug.cpu.cpu_load[3].stddev
     28.22 ±  0%    +299.2%     112.65 ± 16%  sched_debug.cpu.cpu_load[4].avg
    141.27 ±  0%     +29.2%     182.54 ±  6%  sched_debug.cpu.cpu_load[4].stddev
    191.26 ± 18%    +141.2%     461.32 ±  7%  sched_debug.cpu.curr->pid.avg
    676.82 ± 11%     +35.3%     915.48 ±  5%  sched_debug.cpu.curr->pid.stddev
     43084 ±  5%    +336.3%     187993 ±  7%  sched_debug.cpu.load.avg
    164119 ±  2%     +93.1%     316944 ±  5%  sched_debug.cpu.load.stddev
      0.00 ±  9%     +52.3%       0.00 ±  2%  sched_debug.cpu.next_balance.stddev
     22918 ± 44%     +69.8%      38923 ±  1%  sched_debug.cpu.nr_load_updates.avg
      8553 ± 24%    +140.5%      20569 ± 24%  sched_debug.cpu.nr_load_updates.min
     14186 ± 11%     -36.9%       8947 ± 13%  sched_debug.cpu.nr_load_updates.stddev
      0.11 ± 20%    +112.0%       0.24 ±  7%  sched_debug.cpu.nr_running.avg
      0.32 ± 11%     +28.4%       0.41 ±  4%  sched_debug.cpu.nr_running.stddev
      5869 ±  5%    +320.3%      24672 ±  2%  sched_debug.cpu.nr_switches.avg
     29766 ± 33%    +538.3%     190010 ± 25%  sched_debug.cpu.nr_switches.max
      1914 ±  4%    +110.7%       4034 ± 27%  sched_debug.cpu.nr_switches.min
      4636 ± 27%    +560.1%      30603 ± 23%  sched_debug.cpu.nr_switches.stddev
      0.39 ±  1%     -33.8%       0.26 ±  5%  sched_debug.cpu.nr_uninterruptible.avg
     13.12 ± 18%    +116.2%      28.38 ± 24%  sched_debug.cpu.nr_uninterruptible.max
    -15.00 ±-30%    +110.8%     -31.62 ±-22%  sched_debug.cpu.nr_uninterruptible.min
      4.87 ± 14%    +163.9%      12.86 ± 27%  sched_debug.cpu.nr_uninterruptible.stddev
      3796 ±  9%    +494.6%      22569 ±  2%  sched_debug.cpu.sched_count.avg
     26488 ± 37%    +606.2%     187066 ± 26%  sched_debug.cpu.sched_count.max
    190.62 ± 24%   +1031.8%       2157 ± 55%  sched_debug.cpu.sched_count.min
      4339 ± 30%    +601.8%      30453 ± 23%  sched_debug.cpu.sched_count.stddev
      1804 ±  9%    +514.1%      11080 ±  2%  sched_debug.cpu.sched_goidle.avg
     13206 ± 37%    +606.5%      93305 ± 26%  sched_debug.cpu.sched_goidle.max
     45.06 ± 16%   +2154.2%       1015 ± 59%  sched_debug.cpu.sched_goidle.min
      2184 ± 30%    +595.6%      15196 ± 23%  sched_debug.cpu.sched_goidle.stddev
      1822 ±  9%    +515.7%      11222 ±  2%  sched_debug.cpu.ttwu_count.avg
     15326 ± 33%    +524.9%      95775 ± 19%  sched_debug.cpu.ttwu_count.max
     66.31 ± 22%   +1399.6%     994.44 ± 71%  sched_debug.cpu.ttwu_count.min
      2476 ± 28%    +538.9%      15824 ± 16%  sched_debug.cpu.ttwu_count.stddev
      1013 ±  0%     +78.4%       1807 ±  0%  sched_debug.cpu.ttwu_local.avg
      7952 ± 25%     -43.3%       4512 ± 29%  sched_debug.cpu.ttwu_local.max
     35.31 ± 23%    +506.5%     214.19 ± 50%  sched_debug.cpu.ttwu_local.min
      1311 ± 16%     -32.0%     891.49 ± 15%  sched_debug.cpu.ttwu_local.stddev
      2.58 ±  9%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.___might_sleep.__might_sleep.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks
      1.32 ± 10%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.___might_sleep.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages
      0.00 ± -1%      +Inf%       0.93 ±  4%  perf-profile.calltrace.cycles-pp.__add_to_page_cache_locked.add_to_page_cache_lru.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin
      0.00 ± -1%      +Inf%       1.43 ± 11%  perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.pagecache_get_page.grab_cache_page_write_begin
      0.00 ± -1%      +Inf%       2.30 ±  1%  perf-profile.calltrace.cycles-pp.__block_commit_write.block_write_end.generic_write_end.ext4_da_write_end.generic_perform_write
      0.00 ± -1%      +Inf%      16.00 ±  3%  perf-profile.calltrace.cycles-pp.__block_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
      0.00 ± -1%      +Inf%      15.98 ±  3%  perf-profile.calltrace.cycles-pp.__block_write_begin_int.__block_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter
      0.00 ± -1%      +Inf%       4.97 ±  3%  perf-profile.calltrace.cycles-pp.__copy_user_nocache.pmem_do_bvec.pmem_make_request.generic_make_request.submit_bio
      0.00 ± -1%      +Inf%       2.29 ±  3%  perf-profile.calltrace.cycles-pp.__delete_from_page_cache.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_node_memcg
      0.00 ± -1%      +Inf%       2.38 ±  6%  perf-profile.calltrace.cycles-pp.__es_insert_extent.ext4_es_insert_extent.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin
      0.00 ± -1%      +Inf%       1.03 ±  5%  perf-profile.calltrace.cycles-pp.__es_tree_search.ext4_es_insert_extent.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin
      0.00 ± -1%      +Inf%       1.76 ±  6%  perf-profile.calltrace.cycles-pp.__ext4_journal_start_sb.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
      0.00 ± -1%      +Inf%       3.23 ±  3%  perf-profile.calltrace.cycles-pp.__find_get_block.__getblk_gfp.__read_extent_tree_block.ext4_find_extent.ext4_ext_map_blocks
      0.00 ± -1%      +Inf%       1.70 ±  3%  perf-profile.calltrace.cycles-pp.__find_get_block_slow.__find_get_block.__getblk_gfp.__read_extent_tree_block.ext4_find_extent
     35.05 ±  7%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.__find_get_block_slow.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages
      0.00 ± -1%      +Inf%      30.01 ±  3%  perf-profile.calltrace.cycles-pp.__generic_file_write_iter.ext4_file_write_iter.__vfs_write.vfs_write.sys_write
      0.00 ± -1%      +Inf%       3.42 ±  3%  perf-profile.calltrace.cycles-pp.__getblk_gfp.__read_extent_tree_block.ext4_find_extent.ext4_ext_map_blocks.ext4_da_get_block_prep
      1.32 ±  5%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.__might_sleep.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages.do_writepages
      4.51 ±  9%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.__might_sleep.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages
      0.00 ± -1%      +Inf%       1.56 ± 11%  perf-profile.calltrace.cycles-pp.__page_cache_alloc.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write
      0.00 ± -1%      +Inf%       2.00 ±  4%  perf-profile.calltrace.cycles-pp.__radix_tree_lookup.__delete_from_page_cache.__remove_mapping.shrink_page_list.shrink_inactive_list
     15.27 ±  6%     -95.3%       0.73 ±  5%  perf-profile.calltrace.cycles-pp.__radix_tree_lookup.radix_tree_lookup_slot.find_get_entry.pagecache_get_page.__find_get_block_slow
      0.00 ± -1%      +Inf%       1.62 ±  4%  perf-profile.calltrace.cycles-pp.__radix_tree_lookup.radix_tree_lookup_slot.find_get_entry.pagecache_get_page.grab_cache_page_write_begin
      0.00 ± -1%      +Inf%       3.56 ±  3%  perf-profile.calltrace.cycles-pp.__read_extent_tree_block.ext4_find_extent.ext4_ext_map_blocks.ext4_da_get_block_prep.__block_write_begin_int
      0.00 ± -1%      +Inf%       2.90 ±  3%  perf-profile.calltrace.cycles-pp.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node
      0.00 ± -1%      +Inf%       1.27 ±  2%  perf-profile.calltrace.cycles-pp.__set_page_dirty.mark_buffer_dirty.__block_commit_write.block_write_end.generic_write_end
      0.00 ± -1%      +Inf%       0.96 ±  9%  perf-profile.calltrace.cycles-pp.__test_set_page_writeback.ext4_bio_write_page.mpage_submit_page.mpage_process_page_bufs.mpage_prepare_extent_to_map
      0.82 ± 14%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.__tick_nohz_idle_enter.tick_nohz_irq_exit.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
      0.00 ± -1%      +Inf%      30.45 ±  3%  perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
     51.11 ±  7%     -65.2%      17.80 ±  2%  perf-profile.calltrace.cycles-pp.__writeback_inodes_wb.wb_writeback.wb_workfn.process_one_work.worker_thread
     51.11 ±  7%     -65.2%      17.80 ±  2%  perf-profile.calltrace.cycles-pp.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback.wb_workfn
      0.00 ± -1%      +Inf%       1.77 ±  5%  perf-profile.calltrace.cycles-pp.add_to_page_cache_lru.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write
      0.00 ± -1%      +Inf%       1.53 ± 10%  perf-profile.calltrace.cycles-pp.alloc_pages_current.__page_cache_alloc.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin
      2.79 ± 16%     -83.8%       0.45 ± 62%  perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
      0.00 ± -1%      +Inf%       2.42 ±  3%  perf-profile.calltrace.cycles-pp.bio_endio.pmem_make_request.generic_make_request.submit_bio.ext4_io_submit
      0.00 ± -1%      +Inf%       2.35 ±  1%  perf-profile.calltrace.cycles-pp.block_write_end.generic_write_end.ext4_da_write_end.generic_perform_write.__generic_file_write_iter
      0.00 ± -1%      +Inf%       2.19 ±  2%  perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
     51.11 ±  7%     -65.2%      17.80 ±  2%  perf-profile.calltrace.cycles-pp.do_writepages.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback
      0.00 ± -1%      +Inf%       1.04 ±  4%  perf-profile.calltrace.cycles-pp.end_page_writeback.ext4_finish_bio.ext4_end_bio.bio_endio.pmem_make_request
      0.48 ± 58%   +6534.9%      31.85 ±  3%  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_fastpath
      0.00 ± -1%      +Inf%       1.03 ±  2%  perf-profile.calltrace.cycles-pp.ext4_bio_write_page.mpage_submit_page.mpage_map_and_submit_buffers.ext4_writepages.do_writepages
      0.00 ± -1%      +Inf%       9.22 ±  3%  perf-profile.calltrace.cycles-pp.ext4_bio_write_page.mpage_submit_page.mpage_process_page_bufs.mpage_prepare_extent_to_map.ext4_writepages
      0.00 ± -1%      +Inf%      14.90 ±  3%  perf-profile.calltrace.cycles-pp.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin.ext4_da_write_begin.generic_perform_write
      0.00 ± -1%      +Inf%      23.29 ±  4%  perf-profile.calltrace.cycles-pp.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
      0.00 ± -1%      +Inf%       3.61 ±  1%  perf-profile.calltrace.cycles-pp.ext4_da_write_end.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
      0.00 ± -1%      +Inf%       1.53 ±  3%  perf-profile.calltrace.cycles-pp.ext4_end_bio.bio_endio.pmem_make_request.generic_make_request.submit_bio
      0.00 ± -1%      +Inf%       3.80 ±  5%  perf-profile.calltrace.cycles-pp.ext4_es_insert_extent.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin.ext4_da_write_begin
      0.00 ± -1%      +Inf%       4.19 ±  4%  perf-profile.calltrace.cycles-pp.ext4_es_lookup_extent.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin.ext4_da_write_begin
      0.00 ± -1%      +Inf%       6.14 ±  3%  perf-profile.calltrace.cycles-pp.ext4_ext_map_blocks.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin.ext4_da_write_begin
     49.67 ±  7%     -97.1%       1.42 ±  2%  perf-profile.calltrace.cycles-pp.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages.do_writepages.__writeback_single_inode
      0.00 ± -1%      +Inf%      30.29 ±  3%  perf-profile.calltrace.cycles-pp.ext4_file_write_iter.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
      0.00 ± -1%      +Inf%       5.73 ±  3%  perf-profile.calltrace.cycles-pp.ext4_find_extent.ext4_ext_map_blocks.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin
      0.00 ± -1%      +Inf%       1.33 ±  4%  perf-profile.calltrace.cycles-pp.ext4_finish_bio.ext4_end_bio.bio_endio.pmem_make_request.generic_make_request
      0.00 ± -1%      +Inf%       7.16 ±  2%  perf-profile.calltrace.cycles-pp.ext4_io_submit.ext4_bio_write_page.mpage_submit_page.mpage_process_page_bufs.mpage_prepare_extent_to_map
      0.00 ± -1%      +Inf%       1.17 ±  1%  perf-profile.calltrace.cycles-pp.ext4_io_submit.ext4_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes
     50.46 ±  7%     -95.1%       2.46 ±  2%  perf-profile.calltrace.cycles-pp.ext4_map_blocks.ext4_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes
      0.00 ± -1%      +Inf%       0.88 ± 13%  perf-profile.calltrace.cycles-pp.ext4_put_io_end_defer.bio_endio.pmem_make_request.generic_make_request.submit_bio
      0.00 ± -1%      +Inf%       1.18 ±  2%  perf-profile.calltrace.cycles-pp.ext4_releasepage.try_to_release_page.shrink_page_list.shrink_inactive_list.shrink_node_memcg
     51.11 ±  7%     -65.3%      17.72 ±  2%  perf-profile.calltrace.cycles-pp.ext4_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb
      1.32 ±  6%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.find_get_entry.__find_get_block_slow.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks
      0.00 ± -1%      +Inf%       1.32 ±  2%  perf-profile.calltrace.cycles-pp.find_get_entry.pagecache_get_page.__find_get_block_slow.__find_get_block.__getblk_gfp
     21.16 ±  6%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.find_get_entry.pagecache_get_page.__find_get_block_slow.unmap_underlying_metadata.ext4_ext_map_blocks
      0.00 ± -1%      +Inf%       1.66 ±  5%  perf-profile.calltrace.cycles-pp.find_get_entry.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write
      0.00 ± -1%      +Inf%       0.98 ±  2%  perf-profile.calltrace.cycles-pp.find_get_pages_tag.pagevec_lookup_tag.mpage_prepare_extent_to_map.ext4_writepages.do_writepages
      0.00 ± -1%      +Inf%       7.88 ±  2%  perf-profile.calltrace.cycles-pp.generic_make_request.submit_bio.ext4_io_submit.ext4_bio_write_page.mpage_submit_page
      0.00 ± -1%      +Inf%       1.16 ±  1%  perf-profile.calltrace.cycles-pp.generic_make_request.submit_bio.ext4_io_submit.ext4_writepages.do_writepages
      0.00 ± -1%      +Inf%      29.66 ±  3%  perf-profile.calltrace.cycles-pp.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write.vfs_write
      0.00 ± -1%      +Inf%       2.72 ±  2%  perf-profile.calltrace.cycles-pp.generic_write_end.ext4_da_write_end.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
      0.00 ± -1%      +Inf%       1.16 ± 10%  perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.pagecache_get_page
      0.00 ± -1%      +Inf%       5.13 ±  7%  perf-profile.calltrace.cycles-pp.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
      1.02 ± 20%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter
      1.04 ± 11%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter.call_cpuidle
      0.00 ± -1%      +Inf%       1.44 ±  5%  perf-profile.calltrace.cycles-pp.jbd2__journal_start.__ext4_journal_start_sb.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter
      0.00 ± -1%      +Inf%       1.11 ±  2%  perf-profile.calltrace.cycles-pp.jbd2_journal_try_to_free_buffers.ext4_releasepage.try_to_release_page.shrink_page_list.shrink_inactive_list
      0.00 ± -1%      +Inf%       6.87 ±  1%  perf-profile.calltrace.cycles-pp.kswapd.kthread.ret_from_fork
     51.25 ±  7%     -51.4%      24.89 ±  1%  perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
      1.08 ± 19%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter.call_cpuidle
      0.00 ± -1%      +Inf%       1.65 ±  2%  perf-profile.calltrace.cycles-pp.mark_buffer_dirty.__block_commit_write.block_write_end.generic_write_end.ext4_da_write_end
      0.00 ± -1%      +Inf%       1.80 ±  1%  perf-profile.calltrace.cycles-pp.mpage_map_and_submit_buffers.ext4_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes
      0.00 ± -1%      +Inf%      11.87 ±  2%  perf-profile.calltrace.cycles-pp.mpage_prepare_extent_to_map.ext4_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes
      0.00 ± -1%      +Inf%      10.50 ±  3%  perf-profile.calltrace.cycles-pp.mpage_process_page_bufs.mpage_prepare_extent_to_map.ext4_writepages.do_writepages.__writeback_single_inode
      0.00 ± -1%      +Inf%       1.11 ±  1%  perf-profile.calltrace.cycles-pp.mpage_submit_page.mpage_map_and_submit_buffers.ext4_writepages.do_writepages.__writeback_single_inode
      0.00 ± -1%      +Inf%       9.77 ±  3%  perf-profile.calltrace.cycles-pp.mpage_submit_page.mpage_process_page_bufs.mpage_prepare_extent_to_map.ext4_writepages.do_writepages
      0.00 ± -1%      +Inf%       1.39 ±  2%  perf-profile.calltrace.cycles-pp.pagecache_get_page.__find_get_block_slow.__find_get_block.__getblk_gfp.__read_extent_tree_block
     27.73 ±  7%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.pagecache_get_page.__find_get_block_slow.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks
      0.00 ± -1%      +Inf%       5.07 ±  6%  perf-profile.calltrace.cycles-pp.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter
      0.00 ± -1%      +Inf%       0.98 ±  3%  perf-profile.calltrace.cycles-pp.pagevec_lookup_tag.mpage_prepare_extent_to_map.ext4_writepages.do_writepages.__writeback_single_inode
      0.00 ± -1%      +Inf%       5.05 ±  3%  perf-profile.calltrace.cycles-pp.pmem_do_bvec.pmem_make_request.generic_make_request.submit_bio.ext4_io_submit
      0.00 ± -1%      +Inf%       7.58 ±  2%  perf-profile.calltrace.cycles-pp.pmem_make_request.generic_make_request.submit_bio.ext4_io_submit.ext4_bio_write_page
      0.00 ± -1%      +Inf%       1.13 ±  1%  perf-profile.calltrace.cycles-pp.pmem_make_request.generic_make_request.submit_bio.ext4_io_submit.ext4_writepages
     51.22 ±  7%     -65.1%      17.86 ±  2%  perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
     17.22 ±  6%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.radix_tree_lookup_slot.find_get_entry.pagecache_get_page.__find_get_block_slow.unmap_underlying_metadata
      0.00 ± -1%      +Inf%       1.64 ±  4%  perf-profile.calltrace.cycles-pp.radix_tree_lookup_slot.find_get_entry.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin
     51.25 ±  7%     -51.4%      24.89 ±  1%  perf-profile.calltrace.cycles-pp.ret_from_fork
      0.00 ± -1%      +Inf%       5.95 ±  1%  perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_node_memcg.shrink_node.kswapd.kthread
      0.00 ± -1%      +Inf%       6.86 ±  1%  perf-profile.calltrace.cycles-pp.shrink_node.kswapd.kthread.ret_from_fork
      0.00 ± -1%      +Inf%       5.98 ±  1%  perf-profile.calltrace.cycles-pp.shrink_node_memcg.shrink_node.kswapd.kthread.ret_from_fork
      0.00 ± -1%      +Inf%       5.36 ±  1%  perf-profile.calltrace.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node.kswapd
      2.73 ± 16%     -88.4%       0.32 ±103%  perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter.call_cpuidle.cpu_startup_entry
      0.00 ± -1%      +Inf%       7.14 ±  2%  perf-profile.calltrace.cycles-pp.submit_bio.ext4_io_submit.ext4_bio_write_page.mpage_submit_page.mpage_process_page_bufs
      0.00 ± -1%      +Inf%       1.17 ±  1%  perf-profile.calltrace.cycles-pp.submit_bio.ext4_io_submit.ext4_writepages.do_writepages.__writeback_single_inode
      0.00 ± -1%      +Inf%      31.25 ±  3%  perf-profile.calltrace.cycles-pp.sys_write.entry_SYSCALL_64_fastpath
      0.83 ± 14%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.tick_nohz_irq_exit.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter
      0.00 ± -1%      +Inf%       1.20 ±  1%  perf-profile.calltrace.cycles-pp.try_to_release_page.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node
     45.39 ±  7%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages.do_writepages
      0.00 ± -1%      +Inf%      31.09 ±  3%  perf-profile.calltrace.cycles-pp.vfs_write.sys_write.entry_SYSCALL_64_fastpath
     51.11 ±  7%     -65.2%      17.80 ±  2%  perf-profile.calltrace.cycles-pp.wb_workfn.process_one_work.worker_thread.kthread.ret_from_fork
     51.11 ±  7%     -65.2%      17.80 ±  2%  perf-profile.calltrace.cycles-pp.wb_writeback.wb_workfn.process_one_work.worker_thread.kthread
     51.23 ±  7%     -65.1%      17.86 ±  2%  perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
     51.11 ±  7%     -65.2%      17.80 ±  2%  perf-profile.calltrace.cycles-pp.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback.wb_workfn.process_one_work
      3.93 ±  9%     -88.6%       0.45 ±  6%  perf-profile.children.cycles-pp.___might_sleep
      0.00 ± -1%      +Inf%       0.93 ±  5%  perf-profile.children.cycles-pp.__add_to_page_cache_locked
      0.00 ± -1%      +Inf%       1.46 ± 10%  perf-profile.children.cycles-pp.__alloc_pages_nodemask
      0.04 ± 58%   +5668.8%       2.31 ±  1%  perf-profile.children.cycles-pp.__block_commit_write
      0.07 ± 23%  +24519.2%      16.00 ±  3%  perf-profile.children.cycles-pp.__block_write_begin
      0.07 ± 23%  +24488.5%      15.98 ±  3%  perf-profile.children.cycles-pp.__block_write_begin_int
      0.21 ±  5%   +2617.6%       5.78 ±  2%  perf-profile.children.cycles-pp.__copy_user_nocache
      0.00 ± -1%      +Inf%       2.29 ±  3%  perf-profile.children.cycles-pp.__delete_from_page_cache
      0.05 ± 58%   +5573.7%       2.69 ±  6%  perf-profile.children.cycles-pp.__es_insert_extent
      0.01 ±173%  +10280.0%       1.30 ±  5%  perf-profile.children.cycles-pp.__es_tree_search
      0.04 ±102%   +5485.7%       1.96 ±  6%  perf-profile.children.cycles-pp.__ext4_journal_start_sb
      0.00 ± -1%      +Inf%       3.36 ±  2%  perf-profile.children.cycles-pp.__find_get_block
     35.73 ±  7%     -95.0%       1.80 ±  3%  perf-profile.children.cycles-pp.__find_get_block_slow
      0.33 ± 11%   +8997.0%      30.02 ±  3%  perf-profile.children.cycles-pp.__generic_file_write_iter
      0.00 ± -1%      +Inf%       3.58 ±  3%  perf-profile.children.cycles-pp.__getblk_gfp
      5.86 ±  8%     -89.9%       0.59 ±  4%  perf-profile.children.cycles-pp.__might_sleep
      0.00 ± -1%      +Inf%       1.57 ± 10%  perf-profile.children.cycles-pp.__page_cache_alloc
     15.94 ±  6%     -72.4%       4.41 ±  1%  perf-profile.children.cycles-pp.__radix_tree_lookup
      0.00 ± -1%      +Inf%       3.70 ±  3%  perf-profile.children.cycles-pp.__read_extent_tree_block
      0.00 ± -1%      +Inf%       2.91 ±  3%  perf-profile.children.cycles-pp.__remove_mapping
      0.00 ± -1%      +Inf%       1.28 ±  2%  perf-profile.children.cycles-pp.__set_page_dirty
      0.00 ± -1%      +Inf%       1.11 ±  9%  perf-profile.children.cycles-pp.__test_set_page_writeback
      0.87 ± 13%     -75.4%       0.21 ±  7%  perf-profile.children.cycles-pp.__tick_nohz_idle_enter
      0.42 ±  7%   +7123.1%      30.52 ±  3%  perf-profile.children.cycles-pp.__vfs_write
     51.11 ±  7%     -65.2%      17.80 ±  2%  perf-profile.children.cycles-pp.__writeback_inodes_wb
     51.11 ±  7%     -65.2%      17.80 ±  2%  perf-profile.children.cycles-pp.__writeback_single_inode
      0.09 ± 20%   +1173.0%       1.18 ±  9%  perf-profile.children.cycles-pp._raw_spin_lock_irqsave
      0.00 ± -1%      +Inf%       1.77 ±  5%  perf-profile.children.cycles-pp.add_to_page_cache_lru
      0.00 ± -1%      +Inf%       1.55 ± 10%  perf-profile.children.cycles-pp.alloc_pages_current
      2.98 ± 15%     -69.8%       0.90 ± 10%  perf-profile.children.cycles-pp.apic_timer_interrupt
      0.10 ±  8%   +2990.2%       3.17 ±  2%  perf-profile.children.cycles-pp.bio_endio
      0.03 ±102%   +7750.0%       2.36 ±  1%  perf-profile.children.cycles-pp.block_write_end
     47.10 ±  8%     -10.3%      42.25 ±  3%  perf-profile.children.cycles-pp.call_cpuidle
      0.04 ± 58%   +5393.7%       2.20 ±  2%  perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
     47.72 ±  7%     -10.5%      42.72 ±  3%  perf-profile.children.cycles-pp.cpu_startup_entry
     47.06 ±  8%     -10.2%      42.25 ±  3%  perf-profile.children.cycles-pp.cpuidle_enter
     51.11 ±  7%     -65.2%      17.80 ±  2%  perf-profile.children.cycles-pp.do_writepages
      0.01 ±173%  +10920.0%       1.38 ±  4%  perf-profile.children.cycles-pp.end_page_writeback
      0.73 ±  4%   +4272.0%      32.02 ±  3%  perf-profile.children.cycles-pp.entry_SYSCALL_64_fastpath
      0.06 ± 13%  +16332.0%      10.27 ±  2%  perf-profile.children.cycles-pp.ext4_bio_write_page
      0.03 ±100%  +54100.0%      14.90 ±  3%  perf-profile.children.cycles-pp.ext4_da_get_block_prep
      0.18 ±  8%  +12661.6%      23.29 ±  4%  perf-profile.children.cycles-pp.ext4_da_write_begin
      0.07 ± 19%   +5259.3%       3.62 ±  1%  perf-profile.children.cycles-pp.ext4_da_write_end
      0.06 ± 14%   +3352.2%       1.98 ±  4%  perf-profile.children.cycles-pp.ext4_end_bio
      0.09 ± 18%   +4988.6%       4.45 ±  5%  perf-profile.children.cycles-pp.ext4_es_insert_extent
      0.03 ±100%  +16281.8%       4.50 ±  4%  perf-profile.children.cycles-pp.ext4_es_lookup_extent
     49.67 ±  7%     -84.8%       7.57 ±  3%  perf-profile.children.cycles-pp.ext4_ext_map_blocks
      0.32 ± 11%   +9440.9%      30.29 ±  3%  perf-profile.children.cycles-pp.ext4_file_write_iter
      0.07 ± 12%   +8807.4%       6.01 ±  3%  perf-profile.children.cycles-pp.ext4_find_extent
      0.01 ±173%  +13720.0%       1.73 ±  4%  perf-profile.children.cycles-pp.ext4_finish_bio
      0.27 ±  6%   +3330.2%       9.09 ±  2%  perf-profile.children.cycles-pp.ext4_io_submit
     50.46 ±  7%     -95.1%       2.48 ±  2%  perf-profile.children.cycles-pp.ext4_map_blocks
      0.01 ±173%   +7016.7%       1.07 ± 12%  perf-profile.children.cycles-pp.ext4_put_io_end_defer
      0.00 ± -1%      +Inf%       1.18 ±  2%  perf-profile.children.cycles-pp.ext4_releasepage
     51.11 ±  7%     -65.3%      17.72 ±  2%  perf-profile.children.cycles-pp.ext4_writepages
     22.53 ±  6%     -86.4%       3.07 ±  3%  perf-profile.children.cycles-pp.find_get_entry
      0.08 ±  5%   +1047.1%       0.98 ±  2%  perf-profile.children.cycles-pp.find_get_pages_tag
      0.34 ±  1%   +2691.2%       9.56 ±  2%  perf-profile.children.cycles-pp.generic_make_request
      0.32 ±  8%   +9243.3%      29.67 ±  3%  perf-profile.children.cycles-pp.generic_perform_write
      0.05 ± 62%   +5657.9%       2.73 ±  2%  perf-profile.children.cycles-pp.generic_write_end
      0.00 ± -1%      +Inf%       1.22 ± 10%  perf-profile.children.cycles-pp.get_page_from_freelist
      0.09 ± 16%   +5774.3%       5.14 ±  7%  perf-profile.children.cycles-pp.grab_cache_page_write_begin
      1.17 ± 16%     -61.8%       0.45 ±  8%  perf-profile.children.cycles-pp.hrtimer_interrupt
      1.33 ± 16%     -69.0%       0.41 ± 17%  perf-profile.children.cycles-pp.irq_exit
      0.01 ±173%  +10616.7%       1.61 ±  5%  perf-profile.children.cycles-pp.jbd2__journal_start
      0.00 ± -1%      +Inf%       1.12 ±  2%  perf-profile.children.cycles-pp.jbd2_journal_try_to_free_buffers
      0.00 ± -1%      +Inf%       0.94 ± 15%  perf-profile.children.cycles-pp.kmem_cache_alloc
      0.00 ± -1%      +Inf%       6.87 ±  1%  perf-profile.children.cycles-pp.kswapd
     51.25 ±  7%     -51.4%      24.89 ±  1%  perf-profile.children.cycles-pp.kthread
      1.22 ± 16%     -62.0%       0.46 ±  8%  perf-profile.children.cycles-pp.local_apic_timer_interrupt
      0.00 ± -1%      +Inf%       1.66 ±  2%  perf-profile.children.cycles-pp.mark_buffer_dirty
      0.20 ± 12%    +787.7%       1.80 ±  1%  perf-profile.children.cycles-pp.mpage_map_and_submit_buffers
      0.12 ±  4%   +9394.0%      11.87 ±  2%  perf-profile.children.cycles-pp.mpage_prepare_extent_to_map
      0.00 ± -1%      +Inf%      10.51 ±  3%  perf-profile.children.cycles-pp.mpage_process_page_bufs
      0.07 ± 14%  +14430.0%      10.90 ±  2%  perf-profile.children.cycles-pp.mpage_submit_page
     28.44 ±  7%     -77.0%       6.55 ±  5%  perf-profile.children.cycles-pp.pagecache_get_page
      0.09 ±  9%   +1020.0%       0.98 ±  3%  perf-profile.children.cycles-pp.pagevec_lookup_tag
      0.21 ±  6%   +2634.9%       5.88 ±  2%  perf-profile.children.cycles-pp.pmem_do_bvec
      0.32 ±  2%   +2774.2%       9.20 ±  2%  perf-profile.children.cycles-pp.pmem_make_request
     51.22 ±  7%     -65.1%      17.86 ±  2%  perf-profile.children.cycles-pp.process_one_work
     17.93 ±  6%     -86.3%       2.45 ±  3%  perf-profile.children.cycles-pp.radix_tree_lookup_slot
     51.25 ±  7%     -51.4%      24.89 ±  1%  perf-profile.children.cycles-pp.ret_from_fork
      0.00 ± -1%      +Inf%       5.96 ±  1%  perf-profile.children.cycles-pp.shrink_inactive_list
      0.00 ± -1%      +Inf%       6.86 ±  1%  perf-profile.children.cycles-pp.shrink_node
      0.00 ± -1%      +Inf%       5.98 ±  1%  perf-profile.children.cycles-pp.shrink_node_memcg
      0.00 ± -1%      +Inf%       5.36 ±  1%  perf-profile.children.cycles-pp.shrink_page_list
      2.92 ± 15%     -70.2%       0.87 ± 10%  perf-profile.children.cycles-pp.smp_apic_timer_interrupt
      0.34 ±  1%   +2701.5%       9.59 ±  2%  perf-profile.children.cycles-pp.submit_bio
      0.45 ±  8%   +6782.4%      31.31 ±  3%  perf-profile.children.cycles-pp.sys_write
      0.00 ± -1%      +Inf%       1.15 ±  4%  perf-profile.children.cycles-pp.test_clear_page_writeback
      0.87 ± 13%     -80.8%       0.17 ± 11%  perf-profile.children.cycles-pp.tick_nohz_irq_exit
      0.00 ± -1%      +Inf%       1.21 ±  1%  perf-profile.children.cycles-pp.try_to_release_page
     46.07 ±  7%     -99.9%       0.06 ± 11%  perf-profile.children.cycles-pp.unmap_underlying_metadata
      0.45 ±  8%   +6824.4%      31.16 ±  3%  perf-profile.children.cycles-pp.vfs_write
     51.11 ±  7%     -65.2%      17.80 ±  2%  perf-profile.children.cycles-pp.wb_workfn
     51.11 ±  7%     -65.2%      17.80 ±  2%  perf-profile.children.cycles-pp.wb_writeback
     51.23 ±  7%     -65.1%      17.86 ±  2%  perf-profile.children.cycles-pp.worker_thread
     51.11 ±  7%     -65.2%      17.80 ±  2%  perf-profile.children.cycles-pp.writeback_sb_inodes
      3.93 ±  9%     -88.6%       0.45 ±  6%  perf-profile.self.cycles-pp.___might_sleep
      0.21 ±  5%   +2617.6%       5.78 ±  2%  perf-profile.self.cycles-pp.__copy_user_nocache
      0.00 ± -1%      +Inf%       1.31 ±  5%  perf-profile.self.cycles-pp.__es_insert_extent
      0.01 ±173%   +9320.0%       1.18 ±  5%  perf-profile.self.cycles-pp.__es_tree_search
      6.65 ±  7%     -95.7%       0.29 ±  9%  perf-profile.self.cycles-pp.__find_get_block_slow
      3.26 ±  7%     -91.9%       0.26 ±  8%  perf-profile.self.cycles-pp.__might_sleep
     15.94 ±  6%     -72.4%       4.41 ±  1%  perf-profile.self.cycles-pp.__radix_tree_lookup
      0.04 ± 58%   +5393.7%       2.20 ±  2%  perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
      0.03 ±100%  +15418.2%       4.27 ±  4%  perf-profile.self.cycles-pp.ext4_es_lookup_extent
      1.24 ± 14%     -87.1%       0.16 ±  7%  perf-profile.self.cycles-pp.ext4_ext_map_blocks
      0.00 ± -1%      +Inf%       1.99 ±  4%  perf-profile.self.cycles-pp.ext4_find_extent
      0.01 ±173%   +7016.7%       1.07 ± 12%  perf-profile.self.cycles-pp.ext4_put_io_end_defer
      4.63 ±  7%     -86.5%       0.62 ±  7%  perf-profile.self.cycles-pp.find_get_entry
      6.50 ±  9%     -98.3%       0.11 ±  7%  perf-profile.self.cycles-pp.pagecache_get_page
      2.59 ±  8%    -100.0%       0.00 ± -1%  perf-profile.self.cycles-pp.radix_tree_lookup_slot
      4.53 ±  7%    -100.0%       0.00 ± -1%  perf-profile.self.cycles-pp.unmap_underlying_metadata



***************************************************************************************************
lkp-bdw-de1: 16 threads Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz with 8G memory
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase:
  4k/gcc-6/performance/1SSD/ext4/sync/x86_64-rhel-7.2/64/debian-x86_64-2016-08-31.cgz/300s/randwrite/lkp-bdw-de1/400g/fio-basic

commit:
  6f2b562c3a ("direct-io: Use clean_bdev_aliases() instead of handmade iteration")
  adad5aa544 ("ext4: Use clean_bdev_aliases() instead of iteration")

6f2b562c3a89f4a6 adad5aa544e281d84f837b2786
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \  
     21.99 ±  3%     +88.0%      41.34 ±  0%  fio.write_bw_MBps
      5630 ±  3%     +88.0%      10584 ±  0%  fio.write_iops
      0.01 ±  0%   +2550.0%       0.27 ± 10%  fio.latency_100ms%
      1.82 ± 21%     +31.9%       2.40 ±  2%  fio.latency_20us%
      5.41 ±  3%     -44.0%       3.03 ±  1%  fio.latency_250ms%
      0.09 ± 24%     -50.0%       0.04 ± 38%  fio.latency_2us%
      0.01 ±  0%    +800.0%       0.09 ± 39%  fio.latency_50ms%
      0.10 ±  5%     +36.8%       0.13 ± 12%  fio.latency_50us%
      0.01 ±  0%    +100.0%       0.02 ±  0%  fio.latency_750us%
      0.01 ±  0%    -100.0%       0.00 ± -1%  fio.latency_>=2000ms%
  13520302 ±  3%     +87.9%   25407598 ±  0%  fio.time.file_system_outputs
    572.00 ± 11%    +106.8%       1182 ±  9%  fio.time.involuntary_context_switches
      4.00 ±  0%     +75.0%       7.00 ±  0%  fio.time.percent_of_cpu_this_job_got
     10.81 ±  3%     +64.4%      17.77 ±  1%  fio.time.system_time
    124822 ±  0%     +14.3%     142634 ±  0%  fio.time.voluntary_context_switches
    207872 ±  0%    -100.0%      10.50 ±  4%  fio.write_clat_95%_us
     11370 ±  3%     -46.9%       6043 ±  0%  fio.write_clat_mean_us
     48714 ±  1%     -31.2%      33513 ±  0%  fio.write_clat_stddev
    779996 ±  4%    +116.4%    1688045 ±  0%  softirqs.BLOCK
    236088 ±  1%     -40.2%     141136 ±  2%  softirqs.TIMER
 1.273e+08 ± 32%    +125.7%  2.873e+08 ±  7%  cpuidle.C1-BDW.time
   1259421 ±  8%    +150.7%    3157306 ±  1%  cpuidle.C1-BDW.usage
  35610319 ±  8%     +48.0%   52707031 ±  7%  cpuidle.POLL.time
     25450 ±  4%    +159.0%      65928 ±  0%  cpuidle.POLL.usage
      6.06 ±  1%     -43.9%       3.39 ±  0%  iostat.sda.avgqu-sz
      5181 ±  4%    +116.2%      11205 ±  0%  iostat.sda.w/s
     48263 ±  2%     +82.4%      88033 ±  0%  iostat.sda.wkB/s
    281.65 ±  9%    +129.5%     646.34 ±  0%  iostat.sda.wrqm/s
      7.05 ±  2%     -41.8%       4.10 ±  2%  turbostat.%Busy
    175.25 ±  2%     -43.7%      98.75 ±  2%  turbostat.Avg_MHz
      0.83 ± 58%    +102.7%       1.68 ± 22%  turbostat.CPU%c3
     24.78 ±  0%      -6.2%      23.24 ±  0%  turbostat.PkgWatt
     20259 ±  4%    +105.3%      41593 ±  0%  vmstat.io.bo
     40089 ±  3%    +146.8%      98930 ±  0%  vmstat.memory.buff
   4840450 ±  1%     +44.3%    6985650 ±  0%  vmstat.memory.cache
   2944417 ±  3%     -75.0%     735714 ±  0%  vmstat.memory.free
      9190 ±  5%    +131.5%      21271 ±  0%  vmstat.system.cs
     21597 ±  0%     +27.5%      27530 ±  0%  vmstat.system.in
    206807 ±  0%     +28.8%     266435 ±  0%  meminfo.Active
     91674 ±  1%     +63.7%     150046 ±  0%  meminfo.Active(file)
     39950 ±  3%    +147.0%      98691 ±  0%  meminfo.Buffers
   4301589 ±  1%     +42.2%    6117099 ±  0%  meminfo.Cached
    147136 ±  4%     -51.8%      70930 ±  3%  meminfo.CmaFree
   1231530 ±  0%      -9.8%    1110281 ±  0%  meminfo.Dirty
   4165928 ±  1%     +33.2%    5546973 ±  0%  meminfo.Inactive
   4035603 ±  1%     +34.2%    5416550 ±  0%  meminfo.Inactive(file)
   2954425 ±  3%     -75.0%     738296 ±  0%  meminfo.MemFree
    528634 ±  2%     +63.7%     865637 ±  0%  meminfo.SReclaimable
    572788 ±  1%     +58.8%     909788 ±  0%  meminfo.Slab
     82578 ± 22%    +527.1%     517878 ±  0%  meminfo.Unevictable
 3.364e+11 ±  6%     -83.3%  5.634e+10 ±  3%  perf-stat.branch-instructions
      0.08 ±  7%   +1007.3%       0.91 ±  2%  perf-stat.branch-miss-rate%
 2.762e+08 ±  5%     +86.2%  5.142e+08 ±  4%  perf-stat.branch-misses
 1.991e+09 ±  7%     +56.6%  3.118e+09 ±  5%  perf-stat.cache-misses
 1.991e+09 ±  7%     +56.6%  3.118e+09 ±  5%  perf-stat.cache-references
   2785024 ±  5%    +131.2%    6439786 ±  0%  perf-stat.context-switches
 7.542e+11 ±  4%     -46.1%  4.062e+11 ±  3%  perf-stat.cpu-cycles
     12762 ±  8%     +64.5%      20989 ± 10%  perf-stat.cpu-migrations
      0.01 ±  4%    +972.7%       0.07 ±  4%  perf-stat.dTLB-load-miss-rate%
  29564776 ± 14%     +86.4%   55104164 ± 10%  perf-stat.dTLB-load-misses
 4.544e+11 ± 10%     -82.6%  7.909e+10 ±  8%  perf-stat.dTLB-loads
      0.00 ± 11%   +1059.4%       0.01 ±  2%  perf-stat.dTLB-store-miss-rate%
   2671103 ±  5%     +34.6%    3595369 ±  3%  perf-stat.dTLB-store-misses
 3.157e+11 ±  6%     -88.5%  3.635e+10 ±  0%  perf-stat.dTLB-stores
      9.80 ±  0%     -23.4%       7.51 ±  4%  perf-stat.iTLB-load-miss-rate%
   7536032 ±  5%     +15.5%    8705667 ±  4%  perf-stat.iTLB-load-misses
  69358427 ±  4%     +54.6%  1.072e+08 ±  2%  perf-stat.iTLB-loads
 1.841e+12 ±  6%     -84.9%  2.778e+11 ±  3%  perf-stat.instructions
    245649 ± 10%     -87.0%      32014 ±  7%  perf-stat.instructions-per-iTLB-miss
      2.44 ±  2%     -72.0%       0.68 ±  1%  perf-stat.ipc
     13430 ±  3%      -8.6%      12278 ±  4%  slabinfo.anon_vma.active_objs
     13430 ±  3%      -8.6%      12278 ±  4%  slabinfo.anon_vma.num_objs
    829.00 ±  3%     +25.5%       1040 ±  1%  slabinfo.blkdev_requests.active_objs
    829.00 ±  3%     +31.4%       1089 ±  1%  slabinfo.blkdev_requests.num_objs
    868726 ±  2%     +54.0%    1338136 ±  0%  slabinfo.buffer_head.active_objs
     22278 ±  2%     +54.1%      34328 ±  0%  slabinfo.buffer_head.active_slabs
    868881 ±  2%     +54.1%    1338829 ±  0%  slabinfo.buffer_head.num_objs
     22278 ±  2%     +54.1%      34328 ±  0%  slabinfo.buffer_head.num_slabs
    240.50 ±  9%     +72.8%     415.50 ±  7%  slabinfo.ext4_allocation_context.active_objs
    240.50 ±  9%     +72.8%     415.50 ±  7%  slabinfo.ext4_allocation_context.num_objs
   1008816 ±  3%    +102.0%    2037569 ±  0%  slabinfo.ext4_extent_status.active_objs
     10145 ±  3%     +99.7%      20257 ±  0%  slabinfo.ext4_extent_status.active_slabs
   1034848 ±  3%     +99.7%    2066293 ±  0%  slabinfo.ext4_extent_status.num_objs
     10145 ±  3%     +99.7%      20257 ±  0%  slabinfo.ext4_extent_status.num_slabs
    598.50 ±  7%    +204.2%       1820 ± 16%  slabinfo.ext4_io_end.active_objs
    598.50 ±  7%    +204.2%       1820 ± 16%  slabinfo.ext4_io_end.num_objs
      7552 ±  3%    +139.9%      18119 ±  1%  slabinfo.jbd2_journal_head.active_objs
    225.50 ±  3%    +145.5%     553.50 ±  1%  slabinfo.jbd2_journal_head.active_slabs
      7688 ±  3%    +145.0%      18836 ±  1%  slabinfo.jbd2_journal_head.num_objs
    225.50 ±  3%    +145.5%     553.50 ±  1%  slabinfo.jbd2_journal_head.num_slabs
      1705 ±  2%     +10.9%       1892 ±  1%  slabinfo.kmalloc-128.active_objs
      1705 ±  2%     +10.9%       1892 ±  1%  slabinfo.kmalloc-128.num_objs
    635923 ±  1%     +68.0%    1068344 ±  0%  slabinfo.radix_tree_node.active_objs
     22711 ±  1%     +68.1%      38179 ±  0%  slabinfo.radix_tree_node.active_slabs
    635924 ±  1%     +68.1%    1069036 ±  0%  slabinfo.radix_tree_node.num_objs
     22711 ±  1%     +68.1%      38179 ±  0%  slabinfo.radix_tree_node.num_slabs
     43.25 ± 30%    +659.5%     328.50 ±  5%  proc-vmstat.kswapd_high_wmark_hit_quickly
     22911 ±  1%     +63.7%      37511 ±  0%  proc-vmstat.nr_active_file
   1719405 ±  3%     +88.3%    3238051 ±  0%  proc-vmstat.nr_dirtied
    307915 ±  0%      -9.9%     277568 ±  0%  proc-vmstat.nr_dirty
    173320 ±  0%     -11.2%     153959 ±  0%  proc-vmstat.nr_dirty_background_threshold
    347064 ±  0%     -11.2%     308295 ±  0%  proc-vmstat.nr_dirty_threshold
   1084951 ±  1%     +43.2%    1553939 ±  0%  proc-vmstat.nr_file_pages
     36802 ±  4%     -51.8%      17727 ±  3%  proc-vmstat.nr_free_cma
    739051 ±  3%     -75.0%     184500 ±  0%  proc-vmstat.nr_free_pages
   1008614 ±  1%     +34.3%    1354148 ±  0%  proc-vmstat.nr_inactive_file
    132095 ±  2%     +63.8%     216411 ±  0%  proc-vmstat.nr_slab_reclaimable
     20520 ± 22%    +530.9%     129469 ±  0%  proc-vmstat.nr_unevictable
   1455387 ±  4%    +104.6%    2977054 ±  0%  proc-vmstat.nr_written
     22911 ±  1%     +63.7%      37512 ±  0%  proc-vmstat.nr_zone_active_file
   1008623 ±  1%     +34.3%    1354156 ±  0%  proc-vmstat.nr_zone_inactive_file
     20520 ± 22%    +530.9%     129469 ±  0%  proc-vmstat.nr_zone_unevictable
    307921 ±  0%      -9.9%     277574 ±  0%  proc-vmstat.nr_zone_write_pending
   2276380 ±  2%     +82.3%    4149633 ±  0%  proc-vmstat.numa_hit
   2276380 ±  2%     +82.3%    4149633 ±  0%  proc-vmstat.numa_local
      2290 ±  8%     -19.0%       1854 ±  3%  proc-vmstat.pgactivate
    586459 ±  0%     +95.4%    1146061 ±  0%  proc-vmstat.pgalloc_dma32
   1813324 ±  3%     +75.7%    3185203 ±  0%  proc-vmstat.pgalloc_normal
    686981 ± 11%    +280.1%    2611518 ±  0%  proc-vmstat.pgfree
   6092313 ±  4%    +106.2%   12563054 ±  0%  proc-vmstat.pgpgout
     11203 ± 36%    +170.9%      30354 ±  8%  proc-vmstat.pgrotated
    449606 ± 16%    +339.3%    1974929 ±  0%  proc-vmstat.pgscan_kswapd
    233963 ± 24%    +654.4%    1764934 ±  0%  proc-vmstat.pgsteal_kswapd
   1198400 ± 16%    +425.5%    6298080 ±  0%  proc-vmstat.slabs_scanned
      0.00 ±  0%      +Inf%      71874 ±  1%  proc-vmstat.workingset_nodereclaim
      8388 ±  1%     -71.6%       2385 ±  0%  sched_debug.cfs_rq:/.exec_clock.avg
    121674 ±  1%     -83.1%      20594 ±  1%  sched_debug.cfs_rq:/.exec_clock.max
    488.99 ±  7%     +83.9%     899.07 ±  2%  sched_debug.cfs_rq:/.exec_clock.min
     29252 ±  1%     -83.9%       4705 ±  2%  sched_debug.cfs_rq:/.exec_clock.stddev
     14195 ±  1%     -45.0%       7807 ±  8%  sched_debug.cfs_rq:/.min_vruntime.avg
    129717 ±  1%     -79.5%      26584 ±  3%  sched_debug.cfs_rq:/.min_vruntime.max
     29978 ±  1%     -82.1%       5358 ±  4%  sched_debug.cfs_rq:/.min_vruntime.stddev
     42.38 ± 13%     -64.7%      14.96 ± 32%  sched_debug.cfs_rq:/.runnable_load_avg.avg
    587.46 ± 10%     -73.2%     157.17 ± 58%  sched_debug.cfs_rq:/.runnable_load_avg.max
    142.60 ± 10%     -72.0%      39.89 ± 54%  sched_debug.cfs_rq:/.runnable_load_avg.stddev
   -115054 ± -1%     -84.2%     -18209 ± -2%  sched_debug.cfs_rq:/.spread0.avg
   -124959 ± -1%     -82.4%     -21931 ± -2%  sched_debug.cfs_rq:/.spread0.min
     29980 ±  1%     -82.1%       5359 ±  4%  sched_debug.cfs_rq:/.spread0.stddev
    183.46 ±  4%     -12.6%     160.38 ±  3%  sched_debug.cfs_rq:/.util_avg.avg
    758.46 ±  6%     -54.2%     347.67 ± 21%  sched_debug.cfs_rq:/.util_avg.max
    159.02 ±  6%     -59.6%      64.19 ± 27%  sched_debug.cfs_rq:/.util_avg.stddev
      4.62 ± 14%     +25.0%       5.78 ±  8%  sched_debug.cpu.clock.stddev
      4.62 ± 14%     +25.0%       5.78 ±  8%  sched_debug.cpu.clock_task.stddev
     32.76 ± 20%     -71.9%       9.22 ± 28%  sched_debug.cpu.cpu_load[0].avg
    476.58 ± 20%     -82.2%      85.00 ± 41%  sched_debug.cpu.cpu_load[0].max
    115.51 ± 20%     -80.9%      22.05 ± 38%  sched_debug.cpu.cpu_load[0].stddev
     50.30 ± 18%     -35.9%      32.26 ± 22%  sched_debug.cpu.cpu_load[1].avg
    689.00 ± 14%     -47.2%     363.67 ± 40%  sched_debug.cpu.cpu_load[1].max
    167.01 ± 14%     -45.9%      90.39 ± 37%  sched_debug.cpu.cpu_load[1].stddev
     49.20 ± 17%     -36.6%      31.17 ± 12%  sched_debug.cpu.cpu_load[2].avg
    685.33 ± 12%     -48.1%     355.79 ± 25%  sched_debug.cpu.cpu_load[2].max
    165.88 ± 13%     -47.3%      87.49 ± 22%  sched_debug.cpu.cpu_load[2].stddev
     48.09 ± 14%     -39.4%      29.16 ± 13%  sched_debug.cpu.cpu_load[3].avg
    680.83 ± 10%     -50.2%     339.04 ± 21%  sched_debug.cpu.cpu_load[3].max
    164.53 ± 10%     -49.7%      82.78 ± 20%  sched_debug.cpu.cpu_load[3].stddev
     48.58 ± 10%     -41.4%      28.46 ± 14%  sched_debug.cpu.cpu_load[4].avg
    688.96 ±  7%     -51.1%     336.62 ± 20%  sched_debug.cpu.cpu_load[4].max
    166.58 ±  7%     -50.9%      81.78 ± 19%  sched_debug.cpu.cpu_load[4].stddev
     98158 ±  2%     -17.7%      80832 ±  9%  sched_debug.cpu.load.avg
     11358 ±  5%     +18.9%      13505 ±  1%  sched_debug.cpu.nr_load_updates.min
      0.49 ± 22%     -20.4%       0.39 ±  4%  sched_debug.cpu.nr_running.stddev
     68686 ±  5%    +225.6%     223616 ±  0%  sched_debug.cpu.nr_switches.avg
    716941 ±  6%    +334.4%    3114465 ±  0%  sched_debug.cpu.nr_switches.max
     15378 ±  9%     +49.4%      22975 ±  1%  sched_debug.cpu.nr_switches.min
    167664 ±  7%    +345.2%     746520 ±  0%  sched_debug.cpu.nr_switches.stddev
     27.08 ± 11%     +91.2%      51.79 ± 28%  sched_debug.cpu.nr_uninterruptible.max
    -70.88 ±-30%    +137.4%    -168.25 ±-28%  sched_debug.cpu.nr_uninterruptible.min
     21.97 ± 21%    +155.2%      56.05 ± 33%  sched_debug.cpu.nr_uninterruptible.stddev
     67027 ±  5%    +230.8%     221705 ±  0%  sched_debug.cpu.sched_count.avg
    709329 ±  6%    +338.0%    3106542 ±  0%  sched_debug.cpu.sched_count.max
     14191 ± 10%     +52.9%      21695 ±  2%  sched_debug.cpu.sched_count.min
    166147 ±  7%    +348.3%     744893 ±  0%  sched_debug.cpu.sched_count.stddev
     31385 ±  5%    +227.5%     102792 ±  0%  sched_debug.cpu.sched_goidle.avg
    332305 ±  6%    +333.7%    1441132 ±  0%  sched_debug.cpu.sched_goidle.max
      6750 ± 11%     +54.1%      10404 ±  1%  sched_debug.cpu.sched_goidle.min
     77791 ±  7%    +344.2%     345568 ±  0%  sched_debug.cpu.sched_goidle.stddev
     34470 ±  5%    +242.0%     117883 ±  0%  sched_debug.cpu.ttwu_count.avg
    382534 ±  6%    +338.8%    1678433 ±  0%  sched_debug.cpu.ttwu_count.max
      6678 ± 11%     +48.9%       9940 ±  3%  sched_debug.cpu.ttwu_count.min
     89977 ±  7%    +347.8%     402944 ±  0%  sched_debug.cpu.ttwu_count.stddev
     29594 ±  5%    +275.3%     111065 ±  0%  sched_debug.cpu.ttwu_local.avg
    375624 ±  7%    +342.3%    1661530 ±  0%  sched_debug.cpu.ttwu_local.max
      4098 ±  7%     +46.2%       5994 ±  2%  sched_debug.cpu.ttwu_local.min
     89363 ±  7%    +348.0%     400331 ±  0%  sched_debug.cpu.ttwu_local.stddev
      1.71 ± 25%     +28.4%       2.20 ±  7%  sched_debug.rt_rq:/.rt_time.max
      0.44 ± 18%     +21.7%       0.53 ±  7%  sched_debug.rt_rq:/.rt_time.stddev
      2.14 ± 21%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.___might_sleep.__might_sleep.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks
      1.02 ± 29%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.___might_sleep.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages
      0.00 ± -1%      +Inf%       1.14 ± 10%  perf-profile.calltrace.cycles-pp.__blk_run_queue.blk_delay_work.process_one_work.worker_thread.kthread
      0.13 ±173%    +649.0%       0.96 ± 25%  perf-profile.calltrace.cycles-pp.__blk_run_queue.blk_run_queue.scsi_run_queue.scsi_end_request.scsi_io_completion
      0.45 ±100%    +266.9%       1.66 ± 11%  perf-profile.calltrace.cycles-pp.__block_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
      0.45 ±100%    +265.2%       1.65 ± 11%  perf-profile.calltrace.cycles-pp.__block_write_begin_int.__block_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter
     29.24 ± 29%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.__find_get_block_slow.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages
      1.71 ± 56%    +164.8%       4.52 ±  4%  perf-profile.calltrace.cycles-pp.__generic_file_write_iter.ext4_file_write_iter.__vfs_write.vfs_write.sys_write
      0.35 ±104%    +384.5%       1.72 ± 28%  perf-profile.calltrace.cycles-pp.__handle_irq_event_percpu.handle_irq_event_percpu.handle_irq_event.handle_fasteoi_irq.handle_irq
      0.00 ± -1%      +Inf%       0.87 ± 21%  perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
      1.15 ± 21%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.__might_sleep.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages.do_writepages
      3.90 ± 22%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.__might_sleep.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages
     12.80 ± 29%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.__radix_tree_lookup.radix_tree_lookup_slot.find_get_entry.pagecache_get_page.__find_get_block_slow
      0.66 ± 59%    +211.3%       2.06 ± 27%  perf-profile.calltrace.cycles-pp.__softirqentry_text_start.irq_exit.do_IRQ.ret_from_intr.cpuidle_enter
      0.42 ± 58%    +185.0%       1.19 ±  8%  perf-profile.calltrace.cycles-pp.__tick_nohz_idle_enter.tick_nohz_irq_exit.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
      1.73 ± 55%    +161.6%       4.53 ±  4%  perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
     46.87 ± 22%     -78.5%      10.06 ± 23%  perf-profile.calltrace.cycles-pp.__writeback_inodes_wb.wb_writeback.wb_workfn.process_one_work.worker_thread
     46.87 ± 22%     -78.5%      10.06 ± 23%  perf-profile.calltrace.cycles-pp.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback.wb_workfn
      1.14 ± 30%     +80.0%       2.05 ±  6%  perf-profile.calltrace.cycles-pp._raw_spin_lock.tick_do_update_jiffies64.tick_irq_enter.irq_enter.smp_apic_timer_interrupt
      0.13 ±173%    +841.2%       1.20 ± 29%  perf-profile.calltrace.cycles-pp.ahci_handle_port_intr.ahci_single_level_irq_intr.__handle_irq_event_percpu.handle_irq_event_percpu.handle_irq_event
      0.34 ±104%    +398.5%       1.71 ± 27%  perf-profile.calltrace.cycles-pp.ahci_single_level_irq_intr.__handle_irq_event_percpu.handle_irq_event_percpu.handle_irq_event.handle_fasteoi_irq
      3.24 ± 20%    +104.9%       6.64 ±  8%  perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
      0.00 ± -1%      +Inf%       1.16 ± 10%  perf-profile.calltrace.cycles-pp.blk_delay_work.process_one_work.worker_thread.kthread.ret_from_fork
      0.66 ± 59%    +202.6%       2.00 ± 26%  perf-profile.calltrace.cycles-pp.blk_done_softirq.__softirqentry_text_start.irq_exit.do_IRQ.ret_from_intr
      0.13 ±173%    +645.3%       0.99 ± 25%  perf-profile.calltrace.cycles-pp.blk_run_queue.scsi_run_queue.scsi_end_request.scsi_io_completion.scsi_finish_command
      0.36 ±105%    +372.6%       1.73 ± 38%  perf-profile.calltrace.cycles-pp.call_console_drivers.console_unlock.vprintk_emit.vprintk_default.printk
      2.92 ± 40%    +296.0%      11.55 ± 19%  perf-profile.calltrace.cycles-pp.call_cpuidle.cpu_startup_entry.rest_init.start_kernel.x86_64_start_reservations
     44.69 ± 17%     +40.6%      62.85 ±  8%  perf-profile.calltrace.cycles-pp.call_cpuidle.cpu_startup_entry.start_secondary
      0.36 ±105%    +374.7%       1.73 ± 39%  perf-profile.calltrace.cycles-pp.console_unlock.vprintk_emit.vprintk_default.printk.perf_duration_warn
      3.41 ± 39%    +287.3%      13.21 ± 20%  perf-profile.calltrace.cycles-pp.cpu_startup_entry.rest_init.start_kernel.x86_64_start_reservations.x86_64_start_kernel
     45.17 ± 17%     +41.1%      63.73 ±  8%  perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary
      2.92 ± 40%    +295.3%      11.52 ± 19%  perf-profile.calltrace.cycles-pp.cpuidle_enter.call_cpuidle.cpu_startup_entry.rest_init.start_kernel
     44.67 ± 17%     +40.7%      62.83 ±  8%  perf-profile.calltrace.cycles-pp.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
      1.42 ± 40%    +380.2%       6.81 ± 17%  perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry.rest_init
     40.70 ± 17%     +33.2%      54.22 ±  9%  perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
      1.39 ± 41%    +205.1%       4.22 ± 24%  perf-profile.calltrace.cycles-pp.do_IRQ.ret_from_intr.cpuidle_enter.call_cpuidle.cpu_startup_entry
      0.77 ± 24%     +73.7%       1.34 ± 25%  perf-profile.calltrace.cycles-pp.do_wait.sys_wait4.entry_SYSCALL_64_fastpath
     46.87 ± 22%     -78.5%      10.06 ± 23%  perf-profile.calltrace.cycles-pp.do_writepages.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback
      2.72 ± 41%    +141.6%       6.57 ±  6%  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_fastpath
      0.15 ±173%    +418.6%       0.76 ± 30%  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_fastpath.read
      0.33 ±100%    +304.6%       1.32 ± 10%  perf-profile.calltrace.cycles-pp.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin.ext4_da_write_begin.generic_perform_write
      1.01 ± 74%    +210.2%       3.12 ±  3%  perf-profile.calltrace.cycles-pp.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
      0.30 ±100%    +269.2%       1.11 ± 11%  perf-profile.calltrace.cycles-pp.ext4_es_lookup_extent.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin.ext4_da_write_begin
     42.95 ± 26%     -91.3%       3.74 ± 18%  perf-profile.calltrace.cycles-pp.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages.do_writepages.__writeback_single_inode
      1.72 ± 55%    +163.6%       4.53 ±  4%  perf-profile.calltrace.cycles-pp.ext4_file_write_iter.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
     43.95 ± 25%     -89.4%       4.66 ± 17%  perf-profile.calltrace.cycles-pp.ext4_map_blocks.ext4_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes
      0.92 ± 61%    +103.5%       1.87 ± 18%  perf-profile.calltrace.cycles-pp.ext4_split_extent.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages.do_writepages
      0.82 ± 62%     +89.6%       1.55 ± 22%  perf-profile.calltrace.cycles-pp.ext4_split_extent_at.ext4_split_extent.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages
     46.85 ± 22%     -78.5%      10.06 ± 23%  perf-profile.calltrace.cycles-pp.ext4_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb
      1.08 ± 23%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.find_get_entry.__find_get_block_slow.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks
     17.78 ± 29%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.find_get_entry.pagecache_get_page.__find_get_block_slow.unmap_underlying_metadata.ext4_ext_map_blocks
      1.65 ± 58%    +165.7%       4.37 ±  3%  perf-profile.calltrace.cycles-pp.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write.vfs_write
      0.15 ±173%    +672.9%       1.14 ±  9%  perf-profile.calltrace.cycles-pp.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
      0.39 ±103%    +393.6%       1.93 ± 24%  perf-profile.calltrace.cycles-pp.handle_fasteoi_irq.handle_irq.do_IRQ.ret_from_intr.cpuidle_enter
      0.40 ±103%    +392.5%       1.96 ± 23%  perf-profile.calltrace.cycles-pp.handle_irq.do_IRQ.ret_from_intr.cpuidle_enter.call_cpuidle
      0.35 ±104%    +393.7%       1.75 ± 27%  perf-profile.calltrace.cycles-pp.handle_irq_event.handle_fasteoi_irq.handle_irq.do_IRQ.ret_from_intr
      0.35 ±104%    +393.7%       1.75 ± 27%  perf-profile.calltrace.cycles-pp.handle_irq_event_percpu.handle_irq_event.handle_fasteoi_irq.handle_irq.do_IRQ
      0.66 ± 19%    +134.0%       1.53 ± 25%  perf-profile.calltrace.cycles-pp.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter
      0.00 ± -1%      +Inf%       0.92 ± 39%  perf-profile.calltrace.cycles-pp.io_serial_in.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write
      1.64 ± 26%     +81.1%       2.98 ± 11%  perf-profile.calltrace.cycles-pp.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter.call_cpuidle
      0.70 ± 60%    +208.2%       2.17 ± 25%  perf-profile.calltrace.cycles-pp.irq_exit.do_IRQ.ret_from_intr.cpuidle_enter.call_cpuidle
      0.79 ±  9%    +123.0%       1.77 ± 10%  perf-profile.calltrace.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter.call_cpuidle
      0.36 ±105%    +374.7%       1.73 ± 39%  perf-profile.calltrace.cycles-pp.irq_work_interrupt.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
      0.36 ±105%    +374.7%       1.73 ± 39%  perf-profile.calltrace.cycles-pp.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt.cpuidle_enter.call_cpuidle
      0.36 ±105%    +374.7%       1.73 ± 39%  perf-profile.calltrace.cycles-pp.irq_work_run_list.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt.cpuidle_enter
      0.00 ± -1%      +Inf%       1.24 ± 22%  perf-profile.calltrace.cycles-pp.kswapd.kthread.ret_from_fork
     47.58 ± 21%     -70.6%      14.00 ± 19%  perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
      0.69 ± 20%    +140.4%       1.67 ± 23%  perf-profile.calltrace.cycles-pp.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter.call_cpuidle
      1.14 ± 30%     +80.0%       2.05 ±  6%  perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.tick_do_update_jiffies64.tick_irq_enter.irq_enter
     23.25 ± 30%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.pagecache_get_page.__find_get_block_slow.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks
      0.15 ±173%    +669.5%       1.14 ±  9%  perf-profile.calltrace.cycles-pp.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter
      0.36 ±105%    +374.7%       1.73 ± 39%  perf-profile.calltrace.cycles-pp.perf_duration_warn.irq_work_run_list.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt
      5.62 ± 25%    +212.8%      17.59 ± 22%  perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry
      0.36 ±105%    +374.7%       1.73 ± 39%  perf-profile.calltrace.cycles-pp.printk.perf_duration_warn.irq_work_run_list.irq_work_run.smp_irq_work_interrupt
     47.34 ± 21%     -74.7%      11.96 ± 20%  perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
     14.58 ± 29%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.radix_tree_lookup_slot.find_get_entry.pagecache_get_page.__find_get_block_slow.unmap_underlying_metadata
      0.15 ±173%    +427.1%       0.78 ± 28%  perf-profile.calltrace.cycles-pp.read
      3.42 ± 38%    +287.3%      13.25 ± 20%  perf-profile.calltrace.cycles-pp.rest_init.start_kernel.x86_64_start_reservations.x86_64_start_kernel
     47.58 ± 21%     -70.6%      14.00 ± 19%  perf-profile.calltrace.cycles-pp.ret_from_fork
      1.39 ± 41%    +205.4%       4.23 ± 24%  perf-profile.calltrace.cycles-pp.ret_from_intr.cpuidle_enter.call_cpuidle.cpu_startup_entry.rest_init
      0.59 ± 60%    +204.2%       1.81 ± 25%  perf-profile.calltrace.cycles-pp.scsi_end_request.scsi_io_completion.scsi_finish_command.scsi_softirq_done.blk_done_softirq
      0.62 ± 59%    +203.6%       1.88 ± 26%  perf-profile.calltrace.cycles-pp.scsi_finish_command.scsi_softirq_done.blk_done_softirq.__softirqentry_text_start.irq_exit
      0.60 ± 60%    +207.1%       1.84 ± 27%  perf-profile.calltrace.cycles-pp.scsi_io_completion.scsi_finish_command.scsi_softirq_done.blk_done_softirq.__softirqentry_text_start
      0.00 ± -1%      +Inf%       1.12 ± 10%  perf-profile.calltrace.cycles-pp.scsi_request_fn.__blk_run_queue.blk_delay_work.process_one_work.worker_thread
      0.13 ±173%    +641.2%       0.95 ± 24%  perf-profile.calltrace.cycles-pp.scsi_request_fn.__blk_run_queue.blk_run_queue.scsi_run_queue.scsi_end_request
      0.13 ±173%    +645.3%       0.99 ± 25%  perf-profile.calltrace.cycles-pp.scsi_run_queue.scsi_end_request.scsi_io_completion.scsi_finish_command.scsi_softirq_done
      0.63 ± 59%    +203.2%       1.92 ± 25%  perf-profile.calltrace.cycles-pp.scsi_softirq_done.blk_done_softirq.__softirqentry_text_start.irq_exit.do_IRQ
      0.20 ±173%    +687.2%       1.54 ± 38%  perf-profile.calltrace.cycles-pp.serial8250_console_putchar.uart_console_write.serial8250_console_write.univ8250_console_write.call_console_drivers
      0.33 ±105%    +385.6%       1.60 ± 38%  perf-profile.calltrace.cycles-pp.serial8250_console_write.univ8250_console_write.call_console_drivers.console_unlock.vprintk_emit
      0.00 ± -1%      +Inf%       0.84 ± 27%  perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_node_memcg.shrink_node.kswapd.kthread
      0.00 ± -1%      +Inf%       1.23 ± 22%  perf-profile.calltrace.cycles-pp.shrink_node.kswapd.kthread.ret_from_fork
      0.00 ± -1%      +Inf%       0.84 ± 27%  perf-profile.calltrace.cycles-pp.shrink_node_memcg.shrink_node.kswapd.kthread.ret_from_fork
      0.00 ± -1%      +Inf%       0.80 ± 24%  perf-profile.calltrace.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node.kswapd
      3.22 ± 20%    +104.0%       6.56 ±  7%  perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter.call_cpuidle.cpu_startup_entry
      0.36 ±105%    +374.7%       1.73 ± 39%  perf-profile.calltrace.cycles-pp.smp_irq_work_interrupt.irq_work_interrupt.cpuidle_enter.call_cpuidle.cpu_startup_entry
      3.42 ± 38%    +287.3%      13.25 ± 20%  perf-profile.calltrace.cycles-pp.start_kernel.x86_64_start_reservations.x86_64_start_kernel
     45.20 ± 17%     +41.2%      63.81 ±  8%  perf-profile.calltrace.cycles-pp.start_secondary
      0.79 ± 22%     +78.9%       1.42 ± 23%  perf-profile.calltrace.cycles-pp.sys_wait4.entry_SYSCALL_64_fastpath
      1.78 ± 55%    +166.8%       4.74 ±  5%  perf-profile.calltrace.cycles-pp.sys_write.entry_SYSCALL_64_fastpath
      1.25 ± 29%     +77.5%       2.21 ±  9%  perf-profile.calltrace.cycles-pp.tick_do_update_jiffies64.tick_irq_enter.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt
      1.55 ± 26%     +79.1%       2.78 ± 10%  perf-profile.calltrace.cycles-pp.tick_irq_enter.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter
      0.55 ±  8%    +124.1%       1.23 ±  9%  perf-profile.calltrace.cycles-pp.tick_nohz_irq_exit.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter
      0.14 ±173%    +621.4%       1.01 ± 10%  perf-profile.calltrace.cycles-pp.tick_nohz_stop_sched_tick.__tick_nohz_idle_enter.tick_nohz_irq_exit.irq_exit.smp_apic_timer_interrupt
      0.20 ±173%    +687.2%       1.54 ± 38%  perf-profile.calltrace.cycles-pp.uart_console_write.serial8250_console_write.univ8250_console_write.call_console_drivers.console_unlock
      0.33 ±105%    +385.6%       1.60 ± 38%  perf-profile.calltrace.cycles-pp.univ8250_console_write.call_console_drivers.console_unlock.vprintk_emit.vprintk_default
     38.19 ± 28%    -100.0%       0.00 ± -1%  perf-profile.calltrace.cycles-pp.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages.do_writepages
      1.76 ± 55%    +166.3%       4.70 ±  4%  perf-profile.calltrace.cycles-pp.vfs_write.sys_write.entry_SYSCALL_64_fastpath
      0.36 ±105%    +374.7%       1.73 ± 39%  perf-profile.calltrace.cycles-pp.vprintk_default.printk.perf_duration_warn.irq_work_run_list.irq_work_run
      0.36 ±105%    +374.7%       1.73 ± 39%  perf-profile.calltrace.cycles-pp.vprintk_emit.vprintk_default.printk.perf_duration_warn.irq_work_run_list
      0.62 ± 22%     +75.2%       1.09 ± 22%  perf-profile.calltrace.cycles-pp.wait_consider_task.do_wait.sys_wait4.entry_SYSCALL_64_fastpath
      0.19 ±173%    +702.7%       1.50 ± 38%  perf-profile.calltrace.cycles-pp.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write.univ8250_console_write
     46.87 ± 22%     -78.5%      10.06 ± 23%  perf-profile.calltrace.cycles-pp.wb_workfn.process_one_work.worker_thread.kthread.ret_from_fork
     46.87 ± 22%     -78.5%      10.06 ± 23%  perf-profile.calltrace.cycles-pp.wb_writeback.wb_workfn.process_one_work.worker_thread.kthread
     47.44 ± 21%     -74.1%      12.27 ± 19%  perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
     46.87 ± 22%     -78.5%      10.06 ± 23%  perf-profile.calltrace.cycles-pp.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback.wb_workfn.process_one_work
      3.42 ± 38%    +287.3%      13.25 ± 20%  perf-profile.calltrace.cycles-pp.x86_64_start_kernel
      3.42 ± 38%    +287.3%      13.25 ± 20%  perf-profile.calltrace.cycles-pp.x86_64_start_reservations.x86_64_start_kernel
      3.26 ± 22%     -92.8%       0.24 ± 23%  perf-profile.children.cycles-pp.___might_sleep
      0.92 ± 35%    +151.2%       2.31 ± 20%  perf-profile.children.cycles-pp.__blk_run_queue
      0.56 ± 63%    +199.1%       1.66 ± 11%  perf-profile.children.cycles-pp.__block_write_begin
      0.56 ± 63%    +196.4%       1.65 ± 11%  perf-profile.children.cycles-pp.__block_write_begin_int
     29.84 ± 29%     -99.5%       0.14 ± 26%  perf-profile.children.cycles-pp.__find_get_block_slow
      1.71 ± 55%    +164.4%       4.52 ±  4%  perf-profile.children.cycles-pp.__generic_file_write_iter
      0.96 ± 37%     +90.1%       1.81 ± 25%  perf-profile.children.cycles-pp.__handle_irq_event_percpu
      0.42 ± 21%    +137.1%       0.99 ± 21%  perf-profile.children.cycles-pp.__hrtimer_run_queues
      5.19 ± 20%     -93.8%       0.32 ± 21%  perf-profile.children.cycles-pp.__might_sleep
     13.51 ± 28%     -93.0%       0.95 ±  9%  perf-profile.children.cycles-pp.__radix_tree_lookup
      0.32 ± 30%    +230.7%       1.05 ±  8%  perf-profile.children.cycles-pp.__schedule
      1.36 ± 27%    +113.4%       2.91 ± 20%  perf-profile.children.cycles-pp.__softirqentry_text_start
      0.67 ± 17%    +151.3%       1.69 ± 10%  perf-profile.children.cycles-pp.__tick_nohz_idle_enter
      0.35 ± 32%    +111.6%       0.73 ± 38%  perf-profile.children.cycles-pp.__vfs_read
      1.98 ± 50%    +160.2%       5.15 ±  6%  perf-profile.children.cycles-pp.__vfs_write
     46.87 ± 22%     -78.5%      10.06 ± 23%  perf-profile.children.cycles-pp.__writeback_inodes_wb
     46.87 ± 22%     -78.5%      10.06 ± 23%  perf-profile.children.cycles-pp.__writeback_single_inode
      1.61 ± 27%     +98.1%       3.19 ±  9%  perf-profile.children.cycles-pp._raw_spin_lock
      0.33 ± 43%    +132.8%       0.76 ± 25%  perf-profile.children.cycles-pp.ahci_handle_port_interrupt
      0.62 ± 36%    +105.3%       1.27 ± 27%  perf-profile.children.cycles-pp.ahci_handle_port_intr
      0.93 ± 37%     +92.8%       1.80 ± 25%  perf-profile.children.cycles-pp.ahci_single_level_irq_intr
      3.48 ± 19%    +108.3%       7.25 ±  8%  perf-profile.children.cycles-pp.apic_timer_interrupt
      0.22 ± 31%    +218.2%       0.70 ± 34%  perf-profile.children.cycles-pp.ast_imageblit
      0.21 ± 45%    +457.8%       1.16 ± 10%  perf-profile.children.cycles-pp.blk_delay_work
      1.03 ± 36%    +119.9%       2.27 ± 24%  perf-profile.children.cycles-pp.blk_done_softirq
      0.49 ± 40%    +137.9%       1.18 ± 28%  perf-profile.children.cycles-pp.blk_run_queue
      0.55 ± 42%    +213.6%       1.73 ± 38%  perf-profile.children.cycles-pp.call_console_drivers
     47.62 ± 18%     +56.3%      74.41 ±  4%  perf-profile.children.cycles-pp.call_cpuidle
      0.55 ± 42%    +215.0%       1.73 ± 39%  perf-profile.children.cycles-pp.console_unlock
     48.58 ± 18%     +58.4%      76.95 ±  3%  perf-profile.children.cycles-pp.cpu_startup_entry
     47.58 ± 18%     +56.3%      74.35 ±  4%  perf-profile.children.cycles-pp.cpuidle_enter
     42.12 ± 17%     +44.9%      61.02 ±  6%  perf-profile.children.cycles-pp.cpuidle_enter_state
      0.55 ± 47%     +78.5%       0.98 ± 24%  perf-profile.children.cycles-pp.crypto_shash_update
      2.17 ± 37%    +110.4%       4.56 ± 23%  perf-profile.children.cycles-pp.do_IRQ
      0.77 ± 24%     +74.8%       1.35 ± 26%  perf-profile.children.cycles-pp.do_wait
     46.87 ± 22%     -78.5%      10.06 ± 23%  perf-profile.children.cycles-pp.do_writepages
      3.43 ± 35%    +139.4%       8.22 ±  6%  perf-profile.children.cycles-pp.entry_SYSCALL_64_fastpath
      0.41 ± 58%    +220.7%       1.32 ± 10%  perf-profile.children.cycles-pp.ext4_da_get_block_prep
      1.10 ± 58%    +184.7%       3.12 ±  3%  perf-profile.children.cycles-pp.ext4_da_write_begin
      0.46 ± 46%    +181.9%       1.28 ±  5%  perf-profile.children.cycles-pp.ext4_es_lookup_extent
     42.95 ± 26%     -91.1%       3.84 ± 18%  perf-profile.children.cycles-pp.ext4_ext_map_blocks
      1.72 ± 55%    +163.6%       4.53 ±  4%  perf-profile.children.cycles-pp.ext4_file_write_iter
      0.31 ± 35%    +211.3%       0.97 ± 22%  perf-profile.children.cycles-pp.ext4_find_extent
     43.95 ± 25%     -89.3%       4.69 ± 17%  perf-profile.children.cycles-pp.ext4_map_blocks
      1.00 ± 44%     +87.2%       1.87 ± 18%  perf-profile.children.cycles-pp.ext4_split_extent
      0.88 ± 47%     +77.3%       1.56 ± 22%  perf-profile.children.cycles-pp.ext4_split_extent_at
     46.85 ± 22%     -78.5%      10.06 ± 23%  perf-profile.children.cycles-pp.ext4_writepages
     19.06 ± 28%     -96.9%       0.60 ±  6%  perf-profile.children.cycles-pp.find_get_entry
      1.19 ± 38%    +124.8%       2.67 ± 32%  perf-profile.children.cycles-pp.find_get_pages
      1.65 ± 58%    +165.7%       4.37 ±  3%  perf-profile.children.cycles-pp.generic_perform_write
      0.36 ± 52%    +215.9%       1.15 ± 10%  perf-profile.children.cycles-pp.grab_cache_page_write_begin
      1.04 ± 38%     +96.4%       2.04 ± 21%  perf-profile.children.cycles-pp.handle_fasteoi_irq
      1.05 ± 38%     +97.6%       2.07 ± 21%  perf-profile.children.cycles-pp.handle_irq
      0.98 ± 38%     +90.3%       1.86 ± 25%  perf-profile.children.cycles-pp.handle_irq_event
      0.97 ± 38%     +91.8%       1.87 ± 25%  perf-profile.children.cycles-pp.handle_irq_event_percpu
      0.76 ± 16%    +122.6%       1.70 ± 22%  perf-profile.children.cycles-pp.hrtimer_interrupt
      0.29 ± 37%    +227.1%       0.97 ± 39%  perf-profile.children.cycles-pp.io_serial_in
      1.71 ± 25%     +91.8%       3.28 ± 11%  perf-profile.children.cycles-pp.irq_enter
      2.03 ± 22%    +118.6%       4.44 ± 14%  perf-profile.children.cycles-pp.irq_exit
      0.55 ± 42%    +215.0%       1.73 ± 39%  perf-profile.children.cycles-pp.irq_work_interrupt
      0.55 ± 42%    +215.0%       1.73 ± 39%  perf-profile.children.cycles-pp.irq_work_run
      0.55 ± 42%    +215.0%       1.73 ± 39%  perf-profile.children.cycles-pp.irq_work_run_list
      0.00 ± -1%      +Inf%       1.24 ± 22%  perf-profile.children.cycles-pp.kswapd
     47.58 ± 21%     -70.6%      14.00 ± 19%  perf-profile.children.cycles-pp.kthread
      0.80 ± 17%    +128.7%       1.83 ± 21%  perf-profile.children.cycles-pp.local_apic_timer_interrupt
      1.20 ± 28%     +87.0%       2.23 ±  7%  perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
     24.30 ± 28%     -94.8%       1.27 ±  9%  perf-profile.children.cycles-pp.pagecache_get_page
      1.23 ± 39%    +118.4%       2.70 ± 31%  perf-profile.children.cycles-pp.pagevec_lookup
      0.55 ± 42%    +215.0%       1.73 ± 39%  perf-profile.children.cycles-pp.perf_duration_warn
      5.69 ± 22%    +209.2%      17.59 ± 22%  perf-profile.children.cycles-pp.poll_idle
      0.55 ± 42%    +215.0%       1.73 ± 39%  perf-profile.children.cycles-pp.printk
     47.34 ± 21%     -74.7%      11.96 ± 20%  perf-profile.children.cycles-pp.process_one_work
     15.33 ± 28%     -96.2%       0.58 ±  7%  perf-profile.children.cycles-pp.radix_tree_lookup_slot
      0.38 ± 33%    +107.3%       0.78 ± 28%  perf-profile.children.cycles-pp.read
      3.42 ± 38%    +287.3%      13.25 ± 20%  perf-profile.children.cycles-pp.rest_init
     47.58 ± 21%     -70.6%      14.01 ± 19%  perf-profile.children.cycles-pp.ret_from_fork
      2.18 ± 37%    +109.5%       4.57 ± 23%  perf-profile.children.cycles-pp.ret_from_intr
      0.34 ± 30%    +215.3%       1.08 ±  6%  perf-profile.children.cycles-pp.schedule
      0.93 ± 37%    +120.9%       2.06 ± 23%  perf-profile.children.cycles-pp.scsi_end_request
      0.97 ± 36%    +120.7%       2.13 ± 23%  perf-profile.children.cycles-pp.scsi_finish_command
      0.94 ± 37%    +121.8%       2.08 ± 24%  perf-profile.children.cycles-pp.scsi_io_completion
      0.87 ± 36%    +161.2%       2.27 ± 19%  perf-profile.children.cycles-pp.scsi_request_fn
      0.51 ± 41%    +134.6%       1.20 ± 29%  perf-profile.children.cycles-pp.scsi_run_queue
      0.99 ± 36%    +120.8%       2.17 ± 23%  perf-profile.children.cycles-pp.scsi_softirq_done
      0.48 ± 41%    +219.8%       1.54 ± 38%  perf-profile.children.cycles-pp.serial8250_console_putchar
      0.50 ± 41%    +218.9%       1.60 ± 38%  perf-profile.children.cycles-pp.serial8250_console_write
      0.00 ± -1%      +Inf%       0.84 ± 27%  perf-profile.children.cycles-pp.shrink_inactive_list
      0.00 ± -1%      +Inf%       1.23 ± 22%  perf-profile.children.cycles-pp.shrink_node
      0.00 ± -1%      +Inf%       0.84 ± 27%  perf-profile.children.cycles-pp.shrink_node_memcg
      0.00 ± -1%      +Inf%       0.80 ± 24%  perf-profile.children.cycles-pp.shrink_page_list
      3.46 ± 19%    +107.3%       7.16 ±  7%  perf-profile.children.cycles-pp.smp_apic_timer_interrupt
      0.55 ± 42%    +215.0%       1.73 ± 39%  perf-profile.children.cycles-pp.smp_irq_work_interrupt
      3.42 ± 38%    +287.3%      13.25 ± 20%  perf-profile.children.cycles-pp.start_kernel
     45.20 ± 17%     +41.2%      63.81 ±  8%  perf-profile.children.cycles-pp.start_secondary
      0.40 ± 26%    +113.1%       0.85 ± 32%  perf-profile.children.cycles-pp.sys_read
      0.80 ± 22%     +79.9%       1.43 ± 23%  perf-profile.children.cycles-pp.sys_wait4
      2.07 ± 47%    +162.2%       5.43 ±  6%  perf-profile.children.cycles-pp.sys_write
      1.31 ± 29%     +88.4%       2.47 ± 11%  perf-profile.children.cycles-pp.tick_do_update_jiffies64
      1.61 ± 26%     +90.2%       3.07 ± 10%  perf-profile.children.cycles-pp.tick_irq_enter
      0.59 ± 12%    +137.3%       1.40 ±  4%  perf-profile.children.cycles-pp.tick_nohz_irq_exit
      0.59 ± 16%    +141.3%       1.42 ±  8%  perf-profile.children.cycles-pp.tick_nohz_stop_sched_tick
      0.37 ± 42%    +173.2%       1.02 ± 11%  perf-profile.children.cycles-pp.try_to_wake_up
      0.33 ± 42%    +169.9%       0.90 ± 11%  perf-profile.children.cycles-pp.ttwu_do_activate
      0.48 ± 41%    +219.8%       1.54 ± 38%  perf-profile.children.cycles-pp.uart_console_write
      0.50 ± 41%    +218.9%       1.60 ± 38%  perf-profile.children.cycles-pp.univ8250_console_write
     38.78 ± 28%     -99.7%       0.12 ± 42%  perf-profile.children.cycles-pp.unmap_underlying_metadata
      0.39 ± 23%    +116.9%       0.84 ± 34%  perf-profile.children.cycles-pp.vfs_read
      2.04 ± 48%    +162.8%       5.38 ±  6%  perf-profile.children.cycles-pp.vfs_write
      0.55 ± 42%    +215.0%       1.73 ± 39%  perf-profile.children.cycles-pp.vprintk_default
      0.55 ± 42%    +215.0%       1.73 ± 39%  perf-profile.children.cycles-pp.vprintk_emit
      0.63 ± 22%     +78.2%       1.12 ± 21%  perf-profile.children.cycles-pp.wait_consider_task
      0.49 ± 39%    +219.9%       1.57 ± 38%  perf-profile.children.cycles-pp.wait_for_xmitr
     46.87 ± 22%     -78.5%      10.06 ± 23%  perf-profile.children.cycles-pp.wb_workfn
     46.87 ± 22%     -78.5%      10.06 ± 23%  perf-profile.children.cycles-pp.wb_writeback
     47.44 ± 21%     -74.1%      12.27 ± 19%  perf-profile.children.cycles-pp.worker_thread
     46.87 ± 22%     -78.5%      10.06 ± 23%  perf-profile.children.cycles-pp.writeback_sb_inodes
      3.42 ± 38%    +287.3%      13.25 ± 20%  perf-profile.children.cycles-pp.x86_64_start_kernel
      3.42 ± 38%    +287.3%      13.25 ± 20%  perf-profile.children.cycles-pp.x86_64_start_reservations
      3.26 ± 22%     -92.8%       0.24 ± 23%  perf-profile.self.cycles-pp.___might_sleep
      5.38 ± 30%    -100.0%       0.00 ± -1%  perf-profile.self.cycles-pp.__find_get_block_slow
      2.93 ± 23%     -95.0%       0.15 ± 33%  perf-profile.self.cycles-pp.__might_sleep
     13.51 ± 28%     -93.0%       0.95 ±  9%  perf-profile.self.cycles-pp.__radix_tree_lookup
      0.45 ± 21%    +125.6%       1.01 ± 13%  perf-profile.self.cycles-pp._raw_spin_lock
      0.36 ± 22%    +148.3%       0.89 ± 20%  perf-profile.self.cycles-pp.cpuidle_enter_state
      0.46 ± 46%    +181.9%       1.28 ±  5%  perf-profile.self.cycles-pp.ext4_es_lookup_extent
      1.06 ± 24%     -93.0%       0.07 ± 66%  perf-profile.self.cycles-pp.ext4_ext_map_blocks
      3.75 ± 30%    -100.0%       0.00 ± -1%  perf-profile.self.cycles-pp.find_get_entry
      0.46 ± 43%    +174.9%       1.26 ± 35%  perf-profile.self.cycles-pp.find_get_pages
      0.29 ± 37%    +227.1%       0.97 ± 39%  perf-profile.self.cycles-pp.io_serial_in
      1.20 ± 28%     +87.0%       2.23 ±  7%  perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
      5.50 ± 29%    -100.0%       0.00 ± -1%  perf-profile.self.cycles-pp.pagecache_get_page
      5.69 ± 22%    +209.2%      17.59 ± 22%  perf-profile.self.cycles-pp.poll_idle
      2.18 ± 33%    -100.0%       0.00 ± -1%  perf-profile.self.cycles-pp.radix_tree_lookup_slot
      3.85 ± 26%    -100.0%       0.00 ± -1%  perf-profile.self.cycles-pp.unmap_underlying_metadata
      0.60 ± 21%     +78.3%       1.07 ± 21%  perf-profile.self.cycles-pp.wait_consider_task


Thanks,
Xiaolong


Attachments:
  config-4.9.0-rc3-00264-gadad5aa (156K)
  job-script (7K)
  job.yaml (4K)
  reproduce (684 bytes)