syzbot |
| ID | Workflow | Result | Correct | Bug | Created | Started | Finished | Revision | Error |
|---|---|---|---|---|---|---|---|---|---|
| 8c3842ca-dce2-4d02-87a6-75f4c6182c13 | patching | ❓ | | general protection fault in metapage_write_folio | 2026/03/16 14:08 | 2026/03/16 14:11 | 2026/03/16 16:50 | f84caa2f84dea47b51a188e8d6267f295e89bbcb | |
Branch: master
Commit: f338e77383789c0cae23ca3d48adcc5e9e137e3c
Repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
jfs: fix NULL pointer dereference in metapage_write_folio

When a directory is deleted, its index table is truncated, which removes folios from the page cache and sets folio->mapping to NULL. However, dirty metapages can still be lingering in the transaction's tlock list. When txUnlock() eventually calls release_metapage(), it attempts to write the page back via metapage_write_one(), leading to a NULL pointer dereference in metapage_write_folio().

Fix this by checking if folio->mapping is NULL in release_metapage() and force_metapage(). If it is NULL, it means the folio has been truncated. Instead of writing it back, clear the META_dirty and META_sync flags, and remove the metapage from the logsync list if necessary. Clearing META_dirty is crucial because drop_metapage() checks this flag; if it remains set, drop_metapage() returns early, causing a memory leak.

Additionally, fix similar potential NULL pointer dereferences in:
- insert_metapage() and remove_metapage(): On architectures with PAGE_SIZE > 4096, these functions dereference folio->mapping->host->i_blkbits. Change them to use mp->sb->s_blocksize_bits instead, which is always safely available.
- last_write_complete(): Check folio->mapping before calling mapping_set_error() to prevent a crash if a folio is truncated while under writeback.
diff --git a/fs/jfs/jfs_metapage.c b/fs/jfs/jfs_metapage.c
index 64c6eaa7f..16ed69d10 100644
--- a/fs/jfs/jfs_metapage.c
+++ b/fs/jfs/jfs_metapage.c
@@ -106,7 +106,7 @@ static inline int insert_metapage(struct folio *folio, struct metapage *mp)
}
if (mp) {
- l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
+ l2mp_blocks = L2PSIZE - mp->sb->s_blocksize_bits;
index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1);
a->mp_count++;
a->mp[index] = mp;
@@ -118,7 +118,7 @@ static inline int insert_metapage(struct folio *folio, struct metapage *mp)
static inline void remove_metapage(struct folio *folio, struct metapage *mp)
{
struct meta_anchor *a = folio->private;
- int l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
+ int l2mp_blocks = L2PSIZE - mp->sb->s_blocksize_bits;
int index;
index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1);
@@ -392,7 +392,8 @@ static void last_write_complete(struct folio *folio, blk_status_t status)
if (status) {
int err = blk_status_to_errno(status);
printk(KERN_ERR "metapage_write_end_io: I/O error\n");
- mapping_set_error(folio->mapping, err);
+ if (folio->mapping)
+ mapping_set_error(folio->mapping, err);
}
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
@@ -836,9 +837,14 @@ void force_metapage(struct metapage *mp)
clear_bit(META_sync, &mp->flag);
folio_get(folio);
folio_lock(folio);
- folio_mark_dirty(folio);
- if (metapage_write_one(folio))
- jfs_error(mp->sb, "metapage_write_one() failed\n");
+ if (folio->mapping) {
+ folio_mark_dirty(folio);
+ if (metapage_write_one(folio))
+ jfs_error(mp->sb, "metapage_write_one() failed\n");
+ } else {
+ clear_bit(META_dirty, &mp->flag);
+ folio_unlock(folio);
+ }
clear_bit(META_forcewrite, &mp->flag);
folio_put(folio);
}
@@ -877,16 +883,25 @@ void release_metapage(struct metapage * mp)
return;
}
- if (test_bit(META_dirty, &mp->flag)) {
- folio_mark_dirty(folio);
- if (test_bit(META_sync, &mp->flag)) {
- clear_bit(META_sync, &mp->flag);
- if (metapage_write_one(folio))
- jfs_error(mp->sb, "metapage_write_one() failed\n");
- folio_lock(folio);
- }
- } else if (mp->lsn) /* discard_metapage doesn't remove it */
- remove_from_logsync(mp);
+ if (folio->mapping) {
+ if (test_bit(META_dirty, &mp->flag)) {
+ folio_mark_dirty(folio);
+ if (test_bit(META_sync, &mp->flag)) {
+ clear_bit(META_sync, &mp->flag);
+ if (metapage_write_one(folio))
+ jfs_error(
+ mp->sb,
+ "metapage_write_one() failed\n");
+ folio_lock(folio);
+ }
+ } else if (mp->lsn) /* discard_metapage doesn't remove it */
+ remove_from_logsync(mp);
+ } else {
+ clear_bit(META_dirty, &mp->flag);
+ clear_bit(META_sync, &mp->flag);
+ if (mp->lsn)
+ remove_from_logsync(mp);
+ }
/* Try to keep metapages from using up too much memory */
drop_metapage(folio, mp);
Recipients: To: jfs-discussion@lists.sourceforge.net, Dave Kleikamp <shaggy@kernel.org>; Cc: linux-kernel@vger.kernel.org
Oops: general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] SMP KASAN PTI
KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
CPU: 0 UID: 0 PID: 122 Comm: jfsCommit Not tainted syzkaller #0 PREEMPT_{RT,(full)}
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/13/2026
RIP: 0010:metapage_write_folio+0xa7/0xcf0 fs/jfs/jfs_metapage.c:428
Code: fe 48 89 5c 24 18 4c 8d 73 18 4c 89 f0 48 c1 e8 03 42 80 3c 38 00 74 08 4c 89 f7 e8 53 1b de fe 4d 8b 36 4c 89 f0 48 c1 e8 03 <42> 80 3c 38 00 74 08 4c 89 f7 e8 3a 1b de fe 49 8b 1e 43 c6 44 3c
RSP: 0018:ffffc900039d77c0 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffffea0000ef0ac0 RCX: ffff88801bf7bc80
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffffc900039d78d0 R08: ffffea0000ef0af7 R09: 1ffffd40001de15e
R10: dffffc0000000000 R11: fffff940001de15f R12: 1ffff9200073af04
R13: ffffea0000ef0ac0 R14: 0000000000000000 R15: dffffc0000000000
FS: 0000000000000000(0000) GS:ffff8881267fa000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f03ecfa4000 CR3: 0000000030a4c000 CR4: 00000000003526f0
Call Trace:
<TASK>
metapage_write_one+0x2c8/0x520 fs/jfs/jfs_metapage.c:818
release_metapage+0x5b9/0xab0 fs/jfs/jfs_metapage.c:884
txUnlock+0x509/0xdc0 fs/jfs/jfs_txnmgr.c:948
txLazyCommit fs/jfs/jfs_txnmgr.c:2683 [inline]
jfs_lazycommit+0x52c/0xa10 fs/jfs/jfs_txnmgr.c:2734
kthread+0x726/0x8b0 kernel/kthread.c:463
ret_from_fork+0x51b/0xa40 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
</TASK>
Modules linked in:
---[ end trace 0000000000000000 ]---
RIP: 0010:metapage_write_folio+0xa7/0xcf0 fs/jfs/jfs_metapage.c:428
Code: fe 48 89 5c 24 18 4c 8d 73 18 4c 89 f0 48 c1 e8 03 42 80 3c 38 00 74 08 4c 89 f7 e8 53 1b de fe 4d 8b 36 4c 89 f0 48 c1 e8 03 <42> 80 3c 38 00 74 08 4c 89 f7 e8 3a 1b de fe 49 8b 1e 43 c6 44 3c
RSP: 0018:ffffc900039d77c0 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffffea0000ef0ac0 RCX: ffff88801bf7bc80
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffffc900039d78d0 R08: ffffea0000ef0af7 R09: 1ffffd40001de15e
R10: dffffc0000000000 R11: fffff940001de15f R12: 1ffff9200073af04
R13: ffffea0000ef0ac0 R14: 0000000000000000 R15: dffffc0000000000
FS: 0000000000000000(0000) GS:ffff8881267fa000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f03ecfa4000 CR3: 0000000030a4c000 CR4: 00000000003526f0
----------------
Code disassembly (best guess):
0: fe 48 89 decb -0x77(%rax)
3: 5c pop %rsp
4: 24 18 and $0x18,%al
6: 4c 8d 73 18 lea 0x18(%rbx),%r14
a: 4c 89 f0 mov %r14,%rax
d: 48 c1 e8 03 shr $0x3,%rax
11: 42 80 3c 38 00 cmpb $0x0,(%rax,%r15,1)
16: 74 08 je 0x20
18: 4c 89 f7 mov %r14,%rdi
1b: e8 53 1b de fe call 0xfede1b73
20: 4d 8b 36 mov (%r14),%r14
23: 4c 89 f0 mov %r14,%rax
26: 48 c1 e8 03 shr $0x3,%rax
* 2a: 42 80 3c 38 00 cmpb $0x0,(%rax,%r15,1) <-- trapping instruction
2f: 74 08 je 0x39
31: 4c 89 f7 mov %r14,%rdi
34: e8 3a 1b de fe call 0xfede1b73
39: 49 8b 1e mov (%r14),%rbx
3c: 43 rex.XB
3d: c6 .byte 0xc6
3e: 44 rex.R
3f: 3c .byte 0x3c
| Seq | Timestamp | Type | Name | Duration |
|---|---|---|---|---|
| 0/0 | 2026/03/16 14:11 | flow | patching |
2h39m; Results: KernelBranch: master; KernelCommit: f338e77383789c0cae23ca3d48adcc5e9e137e3c; KernelRepo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git; PatchDescription and PatchDiff: identical to the patch shown above; Recipients: To: jfs-discussion@lists.sourceforge.net, Dave Kleikamp <shaggy@kernel.org>; Cc: linux-kernel@vger.kernel.org |
| 1/1 | 2026/03/16 14:11 | action | base-commit-picker |
0m; Results: KernelBranch: master; KernelCommit: f338e77383789c0cae23ca3d48adcc5e9e137e3c; KernelRepo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git |
| 2/1 | 2026/03/16 14:11 | action | kernel-checkouter |
0m; Results: KernelSrc: /usr/local/google/home/dvyukov/syzkaller/agent/workdir/cache/src/f9e37ab80228ecb82635cab5d0ffa3c2484b7a3e |
| 3/1 | 2026/03/16 14:11 | action | kernel-builder |
5m; Results: KernelObj: /usr/local/google/home/dvyukov/syzkaller/agent/workdir/cache/build/e791ae2cc3833ec7c63ec2aea0e9f97959abdc4b |
| 4/1 | 2026/03/16 14:16 | action | crash-reproducer |
2m; Results: ReproducedBugTitle: general protection fault in metapage_write_folio
ReproducedCrashReport:
Oops: general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] SMP KASAN NOPTI
KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
CPU: 1 UID: 0 PID: 121 Comm: jfsCommit Not tainted syzkaller #1 PREEMPT_{RT,(full)}
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.17.0-debian-1.17.0-1 04/01/2014
RIP: 0010:metapage_write_folio+0xa8/0xca0 fs/jfs/jfs_metapage.c:428
Code: e8 cd 34 78 fe 4d 8d 74 24 18 4c 89 f0 48 c1 e8 03 42 80 3c 38 00 74 08 4c 89 f7 e8 82 91 dc fe 4d 8b 36 4c 89 f0 48 c1 e8 03 <42> 80 3c 38 00 74 08 4c 89 f7 e8 69 91 dc fe 49 8b 1e 4d 8d 74 24
RSP: 0018:ffffc90002197840 EFLAGS: 00010256
RAX: 0000000000000000 RBX: 1ffffd400023200e RCX: ffff88801e04bd80
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffffc90002197950 R08: ffffea0001190077 R09: 1ffffd400023200e
R10: dffffc0000000000 R11: fffff9400023200f R12: ffffea0001190040
R13: 0000000000000081 R14: 0000000000000000 R15: dffffc0000000000
FS: 0000000000000000(0000) GS:ffff8880ecaf3000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f1b44e236e0 CR3: 000000000d9be000 CR4: 0000000000750ef0
PKRU: 55555554
Call Trace:
<TASK>
metapage_write_one+0x24c/0x450 fs/jfs/jfs_metapage.c:818
release_metapage+0x5b9/0xab0 fs/jfs/jfs_metapage.c:884
txUnlock+0x509/0xdc0 fs/jfs/jfs_txnmgr.c:948
txLazyCommit fs/jfs/jfs_txnmgr.c:2683 [inline]
jfs_lazycommit+0x52e/0xa10 fs/jfs/jfs_txnmgr.c:2734
kthread+0x389/0x480 kernel/kthread.c:436
ret_from_fork+0x513/0xba0 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
</TASK>
Modules linked in:
---[ end trace 0000000000000000 ]---
RIP: 0010:metapage_write_folio+0xa8/0xca0 fs/jfs/jfs_metapage.c:428
Code: e8 cd 34 78 fe 4d 8d 74 24 18 4c 89 f0 48 c1 e8 03 42 80 3c 38 00 74 08 4c 89 f7 e8 82 91 dc fe 4d 8b 36 4c 89 f0 48 c1 e8 03 <42> 80 3c 38 00 74 08 4c 89 f7 e8 69 91 dc fe 49 8b 1e 4d 8d 74 24
RSP: 0018:ffffc90002197840 EFLAGS: 00010256
RAX: 0000000000000000 RBX: 1ffffd400023200e RCX: ffff88801e04bd80
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffffc90002197950 R08: ffffea0001190077 R09: 1ffffd400023200e
R10: dffffc0000000000 R11: fffff9400023200f R12: ffffea0001190040
R13: 0000000000000081 R14: 0000000000000000 R15: dffffc0000000000
FS: 0000000000000000(0000) GS:ffff8880ecaf3000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f1b44e236e0 CR3: 000000000d9be000 CR4: 0000000000750ef0
PKRU: 55555554
----------------
Code disassembly (best guess):
0: e8 cd 34 78 fe call 0xfe7834d2
5: 4d 8d 74 24 18 lea 0x18(%r12),%r14
a: 4c 89 f0 mov %r14,%rax
d: 48 c1 e8 03 shr $0x3,%rax
11: 42 80 3c 38 00 cmpb $0x0,(%rax,%r15,1)
16: 74 08 je 0x20
18: 4c 89 f7 mov %r14,%rdi
1b: e8 82 91 dc fe call 0xfedc91a2
20: 4d 8b 36 mov (%r14),%r14
23: 4c 89 f0 mov %r14,%rax
26: 48 c1 e8 03 shr $0x3,%rax
* 2a: 42 80 3c 38 00 cmpb $0x0,(%rax,%r15,1) <-- trapping instruction
2f: 74 08 je 0x39
31: 4c 89 f7 mov %r14,%rdi
34: e8 69 91 dc fe call 0xfedc91a2
39: 49 8b 1e mov (%r14),%rbx
3c: 4d rex.WRB
3d: 8d .byte 0x8d
3e: 74 24 je 0x64
] |
| 5/1 | 2026/03/16 14:19 | action | codesearch-prepare |
5m; Results: Index: codesearch-index |
| 6/1 | 2026/03/16 14:24 | agent | debugger |
2h02m; Model: gemini-3.1-pro-preview
Instruction: You are an experienced Linux kernel developer tasked with debugging a kernel crash root cause. You need to provide a detailed explanation of the root cause for another developer to be able to write a fix for the bug based on your explanation. Include all relevant details in the response: function/struct/field/etc. names, code snippets, line numbers, macro/enum values, etc. Prefer calling several tools at the same time to save round-trips.
Prompt:
The crash is:
[Crash report and code disassembly identical to the reproduced crash shown in the crash-reproducer step above.]
Reply: The crash is a NULL pointer dereference in `metapage_write_folio()` caused by `folio->mapping` being `NULL`.
### Root Cause Analysis
In JFS, directory index tables are stored in the directory inode's mapping (`ip->i_mapping`) and accessed via metapages.
When a directory is deleted, the following sequence of events occurs:
1. `jfs_rmdir()` removes the directory entry from its parent. If the directory becomes empty, `dtInitRoot()` is called.
2. `dtInitRoot()` truncates the directory's index table by calling `xtTruncate(tid, ip, 0, COMMIT_PWMAP)` and sets the `COMMIT_Stale` flag on the child directory.
3. The caller (e.g., `jfs_rmdir` or `jfs_unlink`) checks the `COMMIT_Stale` flag and calls `jfs_truncate_nolock(ip, 0)`.
4. `jfs_truncate_nolock()` calls `truncate_setsize(ip, 0)`, which removes all folios of the directory's index table from the page cache. As part of this removal, `folio->mapping` is set to `NULL`.
However, the dirty metapages of the index table (modified during the deletion process) are still in the `tlock` list of the transaction.
When the transaction is eventually committed, `txUnlock()` is called (often asynchronously by the `jfs_lazycommit` kthread).
1. `txUnlock()` calls `put_metapage(mp)`, which in turn calls `release_metapage(mp)`.
2. `release_metapage()` sees that `test_bit(META_dirty, &mp->flag)` is true, and attempts to write the page back by calling `metapage_write_one(folio)`.
3. `metapage_write_one()` calls `metapage_write_folio(folio, &wbc)`.
4. `metapage_write_folio()` executes `struct inode *inode = folio->mapping->host;`. Since `folio->mapping` is `NULL`, this results in a general protection fault / NULL pointer dereference.
### How to Fix
To fix this, `release_metapage()` and `force_metapage()` must check if `folio->mapping` is `NULL` before attempting to write the metapage. If it is `NULL`, it means the folio has been truncated and removed from the page cache. The metapage should not be written back. Instead, we should clear the `META_dirty` and `META_sync` flags, and remove the metapage from the logsync list if necessary.
Clearing the `META_dirty` flag is crucial because `drop_metapage()` checks this flag; if it remains set, `drop_metapage()` will return early, causing a permanent memory leak of the metapage structure.
**1. Fix in `release_metapage` (`fs/jfs/jfs_metapage.c:880`):**
```c
if (folio->mapping) {
if (test_bit(META_dirty, &mp->flag)) {
folio_mark_dirty(folio);
if (test_bit(META_sync, &mp->flag)) {
clear_bit(META_sync, &mp->flag);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
folio_lock(folio);
}
} else if (mp->lsn) /* discard_metapage doesn't remove it */
remove_from_logsync(mp);
} else {
clear_bit(META_dirty, &mp->flag);
clear_bit(META_sync, &mp->flag);
if (mp->lsn)
remove_from_logsync(mp);
}
```
**2. Fix in `force_metapage` (`fs/jfs/jfs_metapage.c:839`):**
```c
folio_lock(folio);
if (folio->mapping) {
folio_mark_dirty(folio);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
} else {
clear_bit(META_dirty, &mp->flag);
folio_unlock(folio);
}
clear_bit(META_forcewrite, &mp->flag);
```
**Additional Note for architectures with `PAGE_SIZE > 4096`:**
On architectures where `PAGE_SIZE > 4096` (e.g., 64K pages on ARM64/PPC), `insert_metapage()` and `remove_metapage()` also dereference `folio->mapping->host->i_blkbits`. To prevent similar crashes there when `folio->mapping` is `NULL`, they should be changed to use `mp->sb->s_blocksize_bits` instead, which is always safely available and identical to `i_blkbits` in JFS. |
| 7/2 | 2026/03/16 14:24 | llm | debugger |
19m; Model: gemini-3.1-pro-preview Tokens: input: 5067 output: 49 thoughts: 707

Thoughts: **Analyzing a KASAN Null-Pointer Dereference in JFS**

Okay, so I've got a general protection fault. It's happening within `metapage_write_folio` at `fs/jfs/jfs_metapage.c:428`. The telltale sign is a KASAN null-pointer dereference – the dreaded `[0x0000000000000000-0x0000000000000007]` range. The offending instruction is the KASAN check `cmpb $0x0,(%rax,%r15,1)`.

Let's break down the registers. `%r15` is acting as the KASAN shadow offset, predictably `dffffc0000000000`. And `%rax`, the base pointer for the check, is, of course, zero. Before the right shift (`shr $0x3, %rax`), `%r14` was also zero. I'm going to dig into the disassembly to see how this value got there.

Here's what I've got. The code loads `0x18(%r12)` into `%r14`. Now, `%r12` is `ffffea0001190040`, so `0x18(%r12)` resolves to `ffffea0001190058`. The next instruction then reads the *contents* of `(%r14)` which happens to be a null pointer. So, `%r14` becomes zero and then the code proceeds to perform the KASAN check on that zero address, causing the crash. Looks like a read of a bad pointer.

I will examine `fs/jfs/jfs_metapage.c:428` to understand where this null pointer is coming from, and why we're dereferencing it here. That's the key. |
| 8/2 | 2026/03/16 14:44 | tool | codesearch-definition-source |
0m; Args: ContextFile: fs/jfs/jfs_metapage.c, IncludeLines: true, Name: metapage_write_folio
Results: SourceCode:
423: static int metapage_write_folio(struct folio *folio,
424: struct writeback_control *wbc)
425: {
426: struct bio *bio = NULL;
427: int block_offset; /* block offset of mp within page */
428: struct inode *inode = folio->mapping->host;
429: int blocks_per_mp = JFS_SBI(inode->i_sb)->nbperpage;
430: int len;
431: int xlen;
432: struct metapage *mp;
433: int redirty = 0;
434: sector_t lblock;
435: int nr_underway = 0;
436: sector_t pblock;
437: sector_t next_block = 0;
438: sector_t page_start;
439: unsigned long bio_bytes = 0;
440: unsigned long bio_offset = 0;
441: int offset;
442: int bad_blocks = 0;
443:
444: page_start = folio_pos(folio) >> inode->i_blkbits;
445: BUG_ON(!folio_test_locked(folio));
446: BUG_ON(folio_test_writeback(folio));
447: folio_start_writeback(folio);
448:
449: for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
450: mp = folio_to_mp(folio, offset);
451:
452: if (!mp || !test_bit(META_dirty, &mp->flag))
453: continue;
454:
455: if (mp->nohomeok && !test_bit(META_forcewrite, &mp->flag)) {
456: redirty = 1;
457: /*
458: * Make sure this page isn't blocked indefinitely.
459: * If the journal isn't undergoing I/O, push it
460: */
461: if (mp->log && !(mp->log->cflag & logGC_PAGEOUT))
462: jfs_flush_journal(mp->log, 0);
463: continue;
464: }
465:
466: clear_bit(META_dirty, &mp->flag);
467: set_bit(META_io, &mp->flag);
468: block_offset = offset >> inode->i_blkbits;
469: lblock = page_start + block_offset;
470: if (bio) {
471: if (xlen && lblock == next_block) {
472: /* Contiguous, in memory & on disk */
473: len = min(xlen, blocks_per_mp);
474: xlen -= len;
475: bio_bytes += len << inode->i_blkbits;
476: continue;
477: }
478: /* Not contiguous */
479: bio_add_folio_nofail(bio, folio, bio_bytes, bio_offset);
480: /*
481: * Increment counter before submitting i/o to keep
482: * count from hitting zero before we're through
483: */
484: inc_io(folio);
485: if (!bio->bi_iter.bi_size)
486: goto dump_bio;
487: submit_bio(bio);
488: nr_underway++;
489: bio = NULL;
490: } else
491: inc_io(folio);
492: xlen = (folio_size(folio) - offset) >> inode->i_blkbits;
493: pblock = metapage_get_blocks(inode, lblock, &xlen);
494: if (!pblock) {
495: printk(KERN_ERR "JFS: metapage_get_blocks failed\n");
496: /*
497: * We already called inc_io(), but can't cancel it
498: * with dec_io() until we're done with the page
499: */
500: bad_blocks++;
501: continue;
502: }
503: len = min(xlen, (int)JFS_SBI(inode->i_sb)->nbperpage);
504:
505: bio = bio_alloc(inode->i_sb->s_bdev, 1, REQ_OP_WRITE, GFP_NOFS);
506: bio->bi_iter.bi_sector = pblock << (inode->i_blkbits - 9);
507: bio->bi_end_io = metapage_write_end_io;
508: bio->bi_private = folio;
509:
510: /* Don't call bio_add_page yet, we may add to this vec */
511: bio_offset = offset;
512: bio_bytes = len << inode->i_blkbits;
513:
514: xlen -= len;
515: next_block = lblock + len;
516: }
517: if (bio) {
518: bio_add_folio_nofail(bio, folio, bio_bytes, bio_offset);
519: if (!bio->bi_iter.bi_size)
520: goto dump_bio;
521:
522: submit_bio(bio);
523: nr_underway++;
524: }
525: if (redirty)
526: folio_redirty_for_writepage(wbc, folio);
527:
528: folio_unlock(folio);
529:
530: if (bad_blocks)
531: goto err_out;
532:
533: if (nr_underway == 0)
534: folio_end_writeback(folio);
535:
536: return 0;
537: dump_bio:
538: print_hex_dump(KERN_ERR, "JFS: dump of bio: ", DUMP_PREFIX_ADDRESS, 16,
539: 4, bio, sizeof(*bio), 0);
540: bio_put(bio);
541: folio_unlock(folio);
542: dec_io(folio, BLK_STS_OK, last_write_complete);
543: err_out:
544: while (bad_blocks--)
545: dec_io(folio, BLK_STS_OK, last_write_complete);
546: return -EIO;
547: }
SourceFile: fs/jfs/jfs_metapage.c |
| 9/2 | 2026/03/16 14:44 | llm | debugger |
1m; Model: gemini-3.1-pro-preview Tokens: input: 8227 output: 49 thoughts: 89

Thoughts: **Analyzing a Potential NULL Pointer Dereference in JFS**

Okay, so I'm digging into this JFS code and I see this line: `struct inode *inode = folio->mapping->host;` in line 428. My first thought is "uh oh, potential NULL dereference!" If `folio->mapping` happens to be NULL, which is totally plausible in this context, we're going to crash trying to access `->host`. Gotta track this down.

Let's figure out where `metapage_write_folio` is actually called. Ah, looks like it's being called from `metapage_write_one`, found at `fs/jfs/jfs_metapage.c:818`. Good, now I have a starting point and can follow the call stack to understand the context and see if this is a real problem and how we can best address it. Time to dig deeper and see what's going on upstream. |
| 10/2 | 2026/03/16 14:45 | tool | codesearch-definition-source |
0m; Args: ContextFile: fs/jfs/jfs_metapage.c, IncludeLines: true, Name: metapage_write_one
Results: SourceCode:
803: static int metapage_write_one(struct folio *folio)
804: {
805: struct address_space *mapping = folio->mapping;
806: struct writeback_control wbc = {
807: .sync_mode = WB_SYNC_ALL,
808: .nr_to_write = folio_nr_pages(folio),
809: };
810: int ret = 0;
811:
812: BUG_ON(!folio_test_locked(folio));
813:
814: folio_wait_writeback(folio);
815:
816: if (folio_clear_dirty_for_io(folio)) {
817: folio_get(folio);
818: ret = metapage_write_folio(folio, &wbc);
819: if (ret == 0)
820: folio_wait_writeback(folio);
821: folio_put(folio);
822: } else {
823: folio_unlock(folio);
824: }
825:
826: if (!ret)
827: ret = filemap_check_errors(mapping);
828: return ret;
829: }
SourceFile: fs/jfs/jfs_metapage.c |
| 11/2 | 2026/03/16 14:45 | llm | debugger |
1m; Model: gemini-3.1-pro-preview Tokens: input: 8974 output: 47 thoughts: 39

Thoughts: Okay, here's my interpretation of that thought process, from an expert's perspective:

**Diving Deep into Metapage Management**

Alright, let's break this down. My mind immediately flags `metapage_write_one`. That's a crucial function in any filesystem that deals with metadata caching. It's the building block, the atomic operation for writing a *single* metapage. The key here is *single*. We're not talking about bulk writes or asynchronous operations necessarily. This is about ensuring a specific metapage gets committed.

Now, the interesting bit. The call comes from `release_metapage` at `fs/jfs/jfs_metapage.c:884`. Okay, so `release_metapage` is the caller. That means we're dealing with a release operation. The metapage is being *released*, likely from a cache or working set. This implies we've had a write, or modifications, to that particular metapage, and now it's time to persist those changes to disk. It's the critical link between volatile memory and persistent storage.

This specific function, at this location, tells me something about JFS's metapage handling: it's likely a relatively low-level operation, possibly within a direct-I/O scheme. The function is responsible for ensuring that the metadata changes are written to the disk when a metapage is released. I'd need to confirm the filesystem implementation details to be sure, of course. My immediate next step would be to check surrounding code. What are the locks involved? What's the context of the release (is it a cache flush, a transaction commit, or something else)? |
| 12/2 | 2026/03/16 14:47 | tool | codesearch-definition-source |
0m Args: map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:release_metapage] Results: map[SourceCode: 865: void release_metapage(struct metapage * mp)
866: {
867: struct folio *folio = mp->folio;
868: jfs_info("release_metapage: mp = 0x%p, flag = 0x%lx", mp, mp->flag);
869:
870: folio_lock(folio);
871: unlock_metapage(mp);
872:
873: assert(mp->count);
874: if (--mp->count || mp->nohomeok) {
875: folio_unlock(folio);
876: folio_put(folio);
877: return;
878: }
879:
880: if (test_bit(META_dirty, &mp->flag)) {
881: folio_mark_dirty(folio);
882: if (test_bit(META_sync, &mp->flag)) {
883: clear_bit(META_sync, &mp->flag);
884: if (metapage_write_one(folio))
885: jfs_error(mp->sb, "metapage_write_one() failed\n");
886: folio_lock(folio);
887: }
888: } else if (mp->lsn) /* discard_metapage doesn't remove it */
889: remove_from_logsync(mp);
890:
891: /* Try to keep metapages from using up too much memory */
892: drop_metapage(folio, mp);
893:
894: folio_unlock(folio);
895: folio_put(folio);
896: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 13/2 | 2026/03/16 14:47 | llm | debugger |
9m Model: gemini-3.1-pro-preview Tokens: input: 9948 output: 46 thoughts: 149 Thoughts: **Analyzing the Potential Issue in `release_metapage`** Okay, so I'm looking at `release_metapage` and specifically how it interacts with the `mp->folio` structure. I see that `mp->folio` is being used, which is standard. My concern is around the potential for a `NULL` `folio->mapping`. If `folio->mapping` is `NULL`, it signifies the folio has either been truncated or removed from the page cache. That's fine, it happens. But, here's the rub: `metapage_write_one` then calls `folio_clear_dirty_for_io(folio)`. Now, if the `folio->mapping` *is* `NULL`, I need to be sure the function handles it correctly. Specifically, I’m thinking about what happens if `folio->mapping` is `NULL` *before* the call to `folio_clear_dirty_for_io`. Could `folio_clear_dirty_for_io` still return true (indicating the folio *was* dirty)? I’m thinking... if `folio->mapping` is null, it should still clear the dirty bit and return true if it was dirty, right? I'd better double-check the code for `folio_clear_dirty_for_io`. Let's take a look. |
| 14/2 | 2026/03/16 14:56 | tool | codesearch-definition-source |
0m Args: map[ContextFile:mm/page-writeback.c IncludeLines:true Name:folio_clear_dirty_for_io] Results: map[SourceCode:2837: /*
2838: * Clear a folio's dirty flag, while caring for dirty memory accounting.
2839: * Returns true if the folio was previously dirty.
2840: *
2841: * This is for preparing to put the folio under writeout. We leave
2842: * the folio tagged as dirty in the xarray so that a concurrent
2843: * write-for-sync can discover it via a PAGECACHE_TAG_DIRTY walk.
2844: * The ->writepage implementation will run either folio_start_writeback()
2845: * or folio_mark_dirty(), at which stage we bring the folio's dirty flag
2846: * and xarray dirty tag back into sync.
2847: *
2848: * This incoherency between the folio's dirty flag and xarray tag is
2849: * unfortunate, but it only exists while the folio is locked.
2850: */
2851: bool folio_clear_dirty_for_io(struct folio *folio)
2852: {
2853: struct address_space *mapping = folio_mapping(folio);
2854: bool ret = false;
2855:
2856: VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
2857:
2858: if (mapping && mapping_can_writeback(mapping)) {
2859: struct inode *inode = mapping->host;
2860: struct bdi_writeback *wb;
2861: struct wb_lock_cookie cookie = {};
2862:
2863: /*
2864: * Yes, Virginia, this is indeed insane.
2865: *
2866: * We use this sequence to make sure that
2867: * (a) we account for dirty stats properly
2868: * (b) we tell the low-level filesystem to
2869: * mark the whole folio dirty if it was
2870: * dirty in a pagetable. Only to then
2871: * (c) clean the folio again and return 1 to
2872: * cause the writeback.
2873: *
2874: * This way we avoid all nasty races with the
2875: * dirty bit in multiple places and clearing
2876: * them concurrently from different threads.
2877: *
2878: * Note! Normally the "folio_mark_dirty(folio)"
2879: * has no effect on the actual dirty bit - since
2880: * that will already usually be set. But we
2881: * need the side effects, and it can help us
2882: * avoid races.
2883: *
2884: * We basically use the folio "master dirty bit"
2885: * as a serialization point for all the different
2886: * threads doing their things.
2887: */
2888: if (folio_mkclean(folio))
2889: folio_mark_dirty(folio);
2890: /*
2891: * We carefully synchronise fault handlers against
2892: * installing a dirty pte and marking the folio dirty
2893: * at this point. We do this by having them hold the
2894: * page lock while dirtying the folio, and folios are
2895: * always locked coming in here, so we get the desired
2896: * exclusion.
2897: */
2898: wb = unlocked_inode_to_wb_begin(inode, &cookie);
2899: if (folio_test_clear_dirty(folio)) {
2900: long nr = folio_nr_pages(folio);
2901: lruvec_stat_mod_folio(folio, NR_FILE_DIRTY, -nr);
2902: zone_stat_mod_folio(folio, NR_ZONE_WRITE_PENDING, -nr);
2903: wb_stat_mod(wb, WB_RECLAIMABLE, -nr);
2904: ret = true;
2905: }
2906: unlocked_inode_to_wb_end(inode, &cookie);
2907: return ret;
2908: }
2909: return folio_test_clear_dirty(folio);
2910: }
SourceFile:mm/page-writeback.c] |
| 15/2 | 2026/03/16 14:56 | llm | debugger |
18m Model: gemini-3.1-pro-preview Tokens: input: 11642 output: 44 thoughts: 192 Thoughts: **Debugging a Truncation-Related Crash** Okay, so I'm digging into this crash, and it seems like `folio_clear_dirty_for_io` is returning a value, specifically `folio_test_clear_dirty(folio)`, when the `mapping` pointer is NULL. That's immediately interesting. It implies that a dirty folio somehow exists after a truncation, which seems counterintuitive. The scenario unfolds: a dirty folio, now truncated (hence `mapping` is NULL), leads to `folio_clear_dirty_for_io` returning true. Then, `metapage_write_one` is called, which in turn invokes `metapage_write_folio(folio, &wbc)`. That's where things go south. Inside `metapage_write_folio`, it tries to dereference `folio->mapping->host`, but `folio->mapping` is NULL, leading to a crash. Makes sense. But wait a minute... why is the folio still considered dirty *after* a truncation where the mapping is NULL? And why is `release_metapage` even being called in this context? Ah, looking at the call stack, I see it's triggered from `txUnlock` at `fs/jfs/jfs_txnmgr.c:948`. That gives me a new lead to investigate how the journaling and locking interactions are causing this state. Time to go deeper into the JFS transaction management code. |
| 16/2 | 2026/03/16 15:14 | tool | codesearch-definition-source |
0m Args: map[ContextFile:fs/jfs/jfs_txnmgr.c IncludeLines:true Name:txUnlock] Results: map[SourceCode: 895: /*
896: * NAME: txUnlock()
897: *
898: * FUNCTION: Initiates pageout of pages modified by tid in journalled
899: * objects and frees their lockwords.
900: */
901: static void txUnlock(struct tblock * tblk)
902: {
903: struct tlock *tlck;
904: struct linelock *linelock;
905: lid_t lid, next, llid, k;
906: struct metapage *mp;
907: struct jfs_log *log;
908: int difft, diffp;
909: unsigned long flags;
910:
911: jfs_info("txUnlock: tblk = 0x%p", tblk);
912: log = JFS_SBI(tblk->sb)->log;
913:
914: /*
915: * mark page under tlock homeok (its log has been written):
916: */
917: for (lid = tblk->next; lid; lid = next) {
918: tlck = lid_to_tlock(lid);
919: next = tlck->next;
920:
921: jfs_info("unlocking lid = %d, tlck = 0x%p", lid, tlck);
922:
923: /* unbind page from tlock */
924: if ((mp = tlck->mp) != NULL &&
925: (tlck->type & tlckBTROOT) == 0) {
926: assert(mp->xflag & COMMIT_PAGE);
927:
928: /* hold buffer
929: */
930: hold_metapage(mp);
931:
932: assert(mp->nohomeok > 0);
933: _metapage_homeok(mp);
934:
935: /* inherit younger/larger clsn */
936: LOGSYNC_LOCK(log, flags);
937: if (mp->clsn) {
938: logdiff(difft, tblk->clsn, log);
939: logdiff(diffp, mp->clsn, log);
940: if (difft > diffp)
941: mp->clsn = tblk->clsn;
942: } else
943: mp->clsn = tblk->clsn;
944: LOGSYNC_UNLOCK(log, flags);
945:
946: assert(!(tlck->flag & tlckFREEPAGE));
947:
948: put_metapage(mp);
949: }
950:
951: /* insert tlock, and linelock(s) of the tlock if any,
952: * at head of freelist
953: */
954: TXN_LOCK();
955:
956: llid = ((struct linelock *) & tlck->lock)->next;
957: while (llid) {
958: linelock = (struct linelock *) lid_to_tlock(llid);
959: k = linelock->next;
960: txLockFree(llid);
961: llid = k;
962: }
963: txLockFree(lid);
964:
965: TXN_UNLOCK();
966: }
967: tblk->next = tblk->last = 0;
968:
969: /*
970: * remove tblock from logsynclist
971: * (allocation map pages inherited lsn of tblk and
972: * has been inserted in logsync list at txUpdateMap())
973: */
974: if (tblk->lsn) {
975: LOGSYNC_LOCK(log, flags);
976: log->count--;
977: list_del(&tblk->synclist);
978: LOGSYNC_UNLOCK(log, flags);
979: }
980: }
SourceFile:fs/jfs/jfs_txnmgr.c] |
| 17/2 | 2026/03/16 15:14 | llm | debugger |
0m Model: gemini-3.1-pro-preview Tokens: input: 13438 output: 47 thoughts: 45 Thoughts: Okay, here's my interpretation of that code snippet, delivered as if it's my own thought process: **Decoding txUnlock and Metapages** Alright, let's break this down. Line 948 in `txUnlock`... right, that's where `put_metapage(mp)` gets called. Now, `put_metapage`... yeah, I remember, it's either a macro or a function – doesn't really matter for this, it's the *action* it performs that's important. It's essentially a wrapper for, or a direct call to, `release_metapage`. So, the core functionality here is definitely about managing and releasing those metapages within the transaction unlock process. This suggests we're working with some form of memory management strategy, probably a buffer cache or something similar. We're releasing a resource, `mp`, which is probably a pointer to the relevant metadata associated with a particular segment or data block. We're marking it as available, I suppose, potentially back to the free list, or maybe invalidating any cached version. The location and sequence are key, so I have to determine why this is done during the transaction unlock process and the implications of this action. It makes sense, as a transaction needs to be finalized and the metadata is now free to be re-written, or reused. |
| 18/2 | 2026/03/16 15:15 | tool | codesearch-definition-source |
0m Args: map[ContextFile:fs/jfs/jfs_metapage.h IncludeLines:true Name:put_metapage] Results: map[SourceCode: 851: void put_metapage(struct metapage *mp)
852: {
853: if (mp->count || mp->nohomeok) {
854: /* Someone else will release this */
855: folio_unlock(mp->folio);
856: return;
857: }
858: folio_get(mp->folio);
859: mp->count++;
860: lock_metapage(mp);
861: folio_unlock(mp->folio);
862: release_metapage(mp);
863: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 19/2 | 2026/03/16 15:15 | llm | debugger |
0m Model: gemini-3.1-pro-preview Tokens: input: 14033 output: 48 thoughts: 228 Thoughts: **Investigating Potential JFS Metapage Write Issue** Okay, so I'm digging into this `put_metapage` function, right? Found it in `fs/jfs/jfs_metapage.c`. Got it. Now, it takes an `mp` argument – good to know. Looking at the call stack, I see it's called in `txUnlock`, specifically at line 930. They use `hold_metapage(mp)` first, then call `_metapage_homeok(mp)`, and finally *bam* `put_metapage(mp)`. Standard stuff, so far. But here's where it gets interesting. If `mp->folio->mapping` is NULL, that usually means the folio was truncated, right? Like, removed from the page cache. Makes sense. But if it's truncated, why is `test_bit(META_dirty, &mp->flag)` *true*? That's a red flag! And even more concerning, why is `metapage_write_one` even attempting to write this thing if it's supposedly gone? My intuition says that when a folio is truncated it should be gone from the page cache entirely. Its mapping should be set to NULL. But here we are. It looks like with JFS metapages, there's some kind of disconnect where the metapage might still be hanging onto the folio. Gotta check `jfs_invalidate_folio` or `jfs_release_folio`. Something is amiss. Let's see if those shed any light on how JFS handles folio removal in relation to metapages. |
| 20/2 | 2026/03/16 15:15 | tool | codesearch-definition-source |
0m Error: requested entity does not exist Args: map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:jfs_invalidate_folio] Results: map[SourceCode: SourceFile:] |
| 21/2 | 2026/03/16 15:15 | llm | debugger |
0m Model: gemini-3.1-pro-preview Tokens: input: 14682 output: 17 thoughts: 18 Thoughts: Okay, here's my summary, presented as a stream of thought: **Finding the `invalidate_folio` Function in the JFS Filesystem** Alright, time to get down to brass tacks. I need to find the implementation of `invalidate_folio`. Given my understanding of the kernel's architecture, and especially my familiarity with filesystems, my instinct says it's likely to be within the filesystem code itself. And specifically, the JFS filesystem code feels like a good place to start. So, let's target that. Let's start the search with the obvious: the `fs/jfs/` directory. That's the logical starting point. Let's see what that yields. I'll need to check the search results and figure out which one is the most likely candidate. |
| 22/2 | 2026/03/16 15:16 | tool | grepper |
0m Args: map[Expression:invalidate_folio] Results: map[Output:
Full output is too long, showing 500 out of 651 lines.
Use more precise expression if possible.
[Documentation/filesystems/iomap/operations.rst=45=address space operations structure:
--
Documentation/filesystems/iomap/operations.rst-48- * ``iomap_release_folio``
Documentation/filesystems/iomap/operations.rst:49: * ``iomap_invalidate_folio``
Documentation/filesystems/iomap/operations.rst-50- * ``iomap_is_partially_uptodate``
--
Documentation/filesystems/locking.rst=245=prototypes::
--
Documentation/filesystems/locking.rst-257- sector_t (*bmap)(struct address_space *, sector_t);
Documentation/filesystems/locking.rst:258: void (*invalidate_folio) (struct folio *, size_t start, size_t len);
Documentation/filesystems/locking.rst-259- bool (*release_folio)(struct folio *, gfp_t);
--
Documentation/filesystems/locking.rst=283=bmap:
Documentation/filesystems/locking.rst:284:invalidate_folio: yes exclusive
Documentation/filesystems/locking.rst-285-release_folio: yes
--
Documentation/filesystems/locking.rst=323=keep it that way and don't breed new callers.
Documentation/filesystems/locking.rst-324-
Documentation/filesystems/locking.rst:325:->invalidate_folio() is called when the filesystem must attempt to drop
Documentation/filesystems/locking.rst-326-some or all of the buffers from the page when it is being truncated. It
--
Documentation/filesystems/locking.rst=328=invalidate_lock before invalidating page cache in truncate / hole punch
Documentation/filesystems/locking.rst:329:path (and thus calling into ->invalidate_folio) to block races between page
Documentation/filesystems/locking.rst-330-cache invalidation and page cache filling functions (fault, read, ...).
--
Documentation/filesystems/netfs_library.rst=460=for it to intervene at various times::
--
Documentation/filesystems/netfs_library.rst-466- bool netfs_dirty_folio(struct address_space *mapping, struct folio *folio);
Documentation/filesystems/netfs_library.rst:467: void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length);
Documentation/filesystems/netfs_library.rst-468- bool netfs_release_folio(struct folio *folio, gfp_t gfp);
--
Documentation/filesystems/vfs.rst=749=cache in your filesystem. The following members are defined:
--
Documentation/filesystems/vfs.rst-764- sector_t (*bmap)(struct address_space *, sector_t);
Documentation/filesystems/vfs.rst:765: void (*invalidate_folio) (struct folio *, size_t start, size_t len);
Documentation/filesystems/vfs.rst-766- bool (*release_folio)(struct folio *, gfp_t);
--
Documentation/filesystems/vfs.rst-891-
Documentation/filesystems/vfs.rst:892:``invalidate_folio``
Documentation/filesystems/vfs.rst:893: If a folio has private data, then invalidate_folio will be
Documentation/filesystems/vfs.rst-894- called when part or all of the folio is to be removed from the
--
block/fops.c=524=const struct address_space_operations def_blk_aops = {
block/fops.c-525- .dirty_folio = block_dirty_folio,
block/fops.c:526: .invalidate_folio = block_invalidate_folio,
block/fops.c-527- .read_folio = blkdev_read_folio,
--
block/fops.c=585=const struct address_space_operations def_blk_aops = {
--
block/fops.c-587- .release_folio = iomap_release_folio,
block/fops.c:588: .invalidate_folio = iomap_invalidate_folio,
block/fops.c-589- .read_folio = blkdev_read_folio,
--
fs/9p/vfs_addr.c=160=const struct address_space_operations v9fs_addr_operations = {
--
fs/9p/vfs_addr.c-164- .release_folio = netfs_release_folio,
fs/9p/vfs_addr.c:165: .invalidate_folio = netfs_invalidate_folio,
fs/9p/vfs_addr.c-166- .direct_IO = noop_direct_IO,
--
fs/adfs/inode.c=77=static const struct address_space_operations adfs_aops = {
fs/adfs/inode.c-78- .dirty_folio = block_dirty_folio,
fs/adfs/inode.c:79: .invalidate_folio = block_invalidate_folio,
fs/adfs/inode.c-80- .read_folio = adfs_read_folio,
--
fs/affs/file.c=459=const struct address_space_operations affs_aops = {
fs/affs/file.c-460- .dirty_folio = block_dirty_folio,
fs/affs/file.c:461: .invalidate_folio = block_invalidate_folio,
fs/affs/file.c-462- .read_folio = affs_read_folio,
--
fs/affs/file.c=840=const struct address_space_operations affs_aops_ofs = {
fs/affs/file.c-841- .dirty_folio = block_dirty_folio,
fs/affs/file.c:842: .invalidate_folio = block_invalidate_folio,
fs/affs/file.c-843- .read_folio = affs_read_folio_ofs,
--
fs/afs/file.c=52=const struct address_space_operations afs_file_aops = {
--
fs/afs/file.c-57- .release_folio = netfs_release_folio,
fs/afs/file.c:58: .invalidate_folio = netfs_invalidate_folio,
fs/afs/file.c-59- .migrate_folio = filemap_migrate_folio,
--
fs/bfs/file.c=192=const struct address_space_operations bfs_aops = {
fs/bfs/file.c-193- .dirty_folio = block_dirty_folio,
fs/bfs/file.c:194: .invalidate_folio = block_invalidate_folio,
fs/bfs/file.c-195- .read_folio = bfs_read_folio,
--
fs/btrfs/disk-io.c=480=static bool btree_release_folio(struct folio *folio, gfp_t gfp_flags)
--
fs/btrfs/disk-io.c-487-
fs/btrfs/disk-io.c:488:static void btree_invalidate_folio(struct folio *folio, size_t offset,
fs/btrfs/disk-io.c-489- size_t length)
--
fs/btrfs/disk-io.c-493- tree = &folio_to_inode(folio)->io_tree;
fs/btrfs/disk-io.c:494: extent_invalidate_folio(tree, folio, offset);
fs/btrfs/disk-io.c-495- btree_release_folio(folio, GFP_NOFS);
--
fs/btrfs/disk-io.c=556=static const struct address_space_operations btree_aops = {
--
fs/btrfs/disk-io.c-558- .release_folio = btree_release_folio,
fs/btrfs/disk-io.c:559: .invalidate_folio = btree_invalidate_folio,
fs/btrfs/disk-io.c-560- .migrate_folio = btree_migrate_folio,
--
fs/btrfs/extent_io.c=2719=void btrfs_readahead(struct readahead_control *rac)
--
fs/btrfs/extent_io.c-2748-/*
fs/btrfs/extent_io.c:2749: * basic invalidate_folio code, this waits on any locked or writeback
fs/btrfs/extent_io.c-2750- * ranges corresponding to the folio, and then deletes any extent state
--
fs/btrfs/extent_io.c-2752- */
fs/btrfs/extent_io.c:2753:int extent_invalidate_folio(struct extent_io_tree *tree,
fs/btrfs/extent_io.c-2754- struct folio *folio, size_t offset)
--
fs/btrfs/extent_io.h=356=void extent_clear_unlock_delalloc(struct btrfs_inode *inode, u64 start, u64 end,
--
fs/btrfs/extent_io.h-359- u32 bits_to_clear, unsigned long page_ops);
fs/btrfs/extent_io.h:360:int extent_invalidate_folio(struct extent_io_tree *tree,
fs/btrfs/extent_io.h-361- struct folio *folio, size_t offset);
--
fs/btrfs/inode.c=5465=static int btrfs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
--
fs/btrfs/inode.c-5498- * While truncating the inode pages during eviction, we get the VFS
fs/btrfs/inode.c:5499: * calling btrfs_invalidate_folio() against each folio of the inode. This
fs/btrfs/inode.c:5500: * is slow because the calls to btrfs_invalidate_folio() result in a
fs/btrfs/inode.c-5501- * huge amount of calls to lock_extent() and clear_extent_bit(),
--
fs/btrfs/inode.c-5504- *
fs/btrfs/inode.c:5505: * Therefore if the inode is being evicted, let btrfs_invalidate_folio()
fs/btrfs/inode.c-5506- * skip all those expensive operations on a per folio basis and do only
--
fs/btrfs/inode.c=5510=static void evict_inode_truncate_pages(struct inode *inode)
--
fs/btrfs/inode.c-5556- * So we need to free its reserved space here.
fs/btrfs/inode.c:5557: * (Refer to comment in btrfs_invalidate_folio, case 2)
fs/btrfs/inode.c-5558- *
--
fs/btrfs/inode.c=7512=struct extent_map *btrfs_create_io_em(struct btrfs_inode *inode, u64 start,
--
fs/btrfs/inode.c-7581-/*
fs/btrfs/inode.c:7582: * For release_folio() and invalidate_folio() we have a race window where
fs/btrfs/inode.c-7583- * folio_end_writeback() is called but the subpage spinlock is not yet released.
--
fs/btrfs/inode.c=7638=static int btrfs_migrate_folio(struct address_space *mapping,
--
fs/btrfs/inode.c-7657-
fs/btrfs/inode.c:7658:static void btrfs_invalidate_folio(struct folio *folio, size_t offset,
fs/btrfs/inode.c-7659- size_t length)
--
fs/btrfs/inode.c-7676- * already cleared, so it's possible for endio and
fs/btrfs/inode.c:7677: * invalidate_folio to do the same ordered extent accounting twice
fs/btrfs/inode.c-7678- * on one folio.
--
fs/btrfs/inode.c=10714=static const struct address_space_operations btrfs_aops = {
--
fs/btrfs/inode.c-10717- .readahead = btrfs_readahead,
fs/btrfs/inode.c:10718: .invalidate_folio = btrfs_invalidate_folio,
fs/btrfs/inode.c-10719- .launder_folio = btrfs_launder_folio,
--
fs/buffer.c=1599=static void discard_buffer(struct buffer_head * bh)
--
fs/buffer.c-1613-/**
fs/buffer.c:1614: * block_invalidate_folio - Invalidate part or all of a buffer-backed folio.
fs/buffer.c-1615- * @folio: The folio which is affected.
--
fs/buffer.c-1618- *
fs/buffer.c:1619: * block_invalidate_folio() is called when all or part of the folio has been
fs/buffer.c-1620- * invalidated by a truncate operation.
fs/buffer.c-1621- *
fs/buffer.c:1622: * block_invalidate_folio() does not have to release all buffers, but it must
fs/buffer.c-1623- * ensure that no dirty buffer is left outside @offset and that no I/O
--
fs/buffer.c-1627- */
fs/buffer.c:1628:void block_invalidate_folio(struct folio *folio, size_t offset, size_t length)
fs/buffer.c-1629-{
--
fs/buffer.c-1674-}
fs/buffer.c:1675:EXPORT_SYMBOL(block_invalidate_folio);
fs/buffer.c-1676-
--
fs/ceph/addr.c=81=static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
--
fs/ceph/addr.c-126- * Reference snap context in folio->private. Also set
fs/ceph/addr.c:127: * PagePrivate so that we get invalidate_folio callback.
fs/ceph/addr.c-128- */
--
fs/ceph/addr.c-139- */
fs/ceph/addr.c:140:static void ceph_invalidate_folio(struct folio *folio, size_t offset,
fs/ceph/addr.c-141- size_t length)
--
fs/ceph/addr.c-164-
fs/ceph/addr.c:165: netfs_invalidate_folio(folio, offset, length);
fs/ceph/addr.c-166-}
--
fs/ceph/addr.c=1919=const struct address_space_operations ceph_aops = {
--
fs/ceph/addr.c-1925- .dirty_folio = ceph_dirty_folio,
fs/ceph/addr.c:1926: .invalidate_folio = ceph_invalidate_folio,
fs/ceph/addr.c-1927- .release_folio = netfs_release_folio,
--
fs/ecryptfs/mmap.c=515=const struct address_space_operations ecryptfs_aops = {
--
fs/ecryptfs/mmap.c-525- .dirty_folio = block_dirty_folio,
fs/ecryptfs/mmap.c:526: .invalidate_folio = block_invalidate_folio,
fs/ecryptfs/mmap.c-527-#endif
--
fs/erofs/data.c=445=const struct address_space_operations erofs_aops = {
--
fs/erofs/data.c-450- .release_folio = iomap_release_folio,
fs/erofs/data.c:451: .invalidate_folio = iomap_invalidate_folio,
fs/erofs/data.c-452-};
--
fs/erofs/zdata.c=618=static bool z_erofs_cache_release_folio(struct folio *folio, gfp_t gfp)
--
fs/erofs/zdata.c-649- */
fs/erofs/zdata.c:650:static void z_erofs_cache_invalidate_folio(struct folio *folio,
fs/erofs/zdata.c-651- size_t offset, size_t length)
--
fs/erofs/zdata.c=663=static const struct address_space_operations z_erofs_cache_aops = {
fs/erofs/zdata.c-664- .release_folio = z_erofs_cache_release_folio,
fs/erofs/zdata.c:665: .invalidate_folio = z_erofs_cache_invalidate_folio,
fs/erofs/zdata.c-666-};
--
fs/exfat/inode.c=544=static const struct address_space_operations exfat_aops = {
fs/exfat/inode.c-545- .dirty_folio = block_dirty_folio,
fs/exfat/inode.c:546: .invalidate_folio = block_invalidate_folio,
fs/exfat/inode.c-547- .read_folio = exfat_read_folio,
--
fs/ext2/inode.c=971=const struct address_space_operations ext2_aops = {
fs/ext2/inode.c-972- .dirty_folio = block_dirty_folio,
fs/ext2/inode.c:973: .invalidate_folio = block_invalidate_folio,
fs/ext2/inode.c-974- .read_folio = ext2_read_folio,
--
fs/ext4/inode.c=1705=static void mpage_release_unused_pages(struct mpage_da_data *mpd,
--
fs/ext4/inode.c-1751- folio_clear_dirty_for_io(folio);
fs/ext4/inode.c:1752: block_invalidate_folio(folio, 0,
fs/ext4/inode.c-1753- folio_size(folio));
--
fs/ext4/inode.c=3345=static sector_t ext4_bmap(struct address_space *mapping, sector_t block)
--
fs/ext4/inode.c-3374-
fs/ext4/inode.c:3375:static void ext4_invalidate_folio(struct folio *folio, size_t offset,
fs/ext4/inode.c-3376- size_t length)
fs/ext4/inode.c-3377-{
fs/ext4/inode.c:3378: trace_ext4_invalidate_folio(folio, offset, length);
fs/ext4/inode.c-3379-
--
fs/ext4/inode.c-3382-
fs/ext4/inode.c:3383: block_invalidate_folio(folio, offset, length);
fs/ext4/inode.c-3384-}
fs/ext4/inode.c-3385-
fs/ext4/inode.c:3386:static int __ext4_journalled_invalidate_folio(struct folio *folio,
fs/ext4/inode.c-3387- size_t offset, size_t length)
--
fs/ext4/inode.c-3390-
fs/ext4/inode.c:3391: trace_ext4_journalled_invalidate_folio(folio, offset, length);
fs/ext4/inode.c-3392-
--
fs/ext4/inode.c-3398-
fs/ext4/inode.c:3399: return jbd2_journal_invalidate_folio(journal, folio, offset, length);
fs/ext4/inode.c-3400-}
--
fs/ext4/inode.c-3402-/* Wrapper for aops... */
fs/ext4/inode.c:3403:static void ext4_journalled_invalidate_folio(struct folio *folio,
fs/ext4/inode.c-3404- size_t offset,
--
fs/ext4/inode.c-3406-{
fs/ext4/inode.c:3407: WARN_ON(__ext4_journalled_invalidate_folio(folio, offset, length) < 0);
fs/ext4/inode.c-3408-}
--
fs/ext4/inode.c=3924=static const struct address_space_operations ext4_aops = {
--
fs/ext4/inode.c-3931- .bmap = ext4_bmap,
fs/ext4/inode.c:3932: .invalidate_folio = ext4_invalidate_folio,
fs/ext4/inode.c-3933- .release_folio = ext4_release_folio,
--
fs/ext4/inode.c=3940=static const struct address_space_operations ext4_journalled_aops = {
--
fs/ext4/inode.c-3947- .bmap = ext4_bmap,
fs/ext4/inode.c:3948: .invalidate_folio = ext4_journalled_invalidate_folio,
fs/ext4/inode.c-3949- .release_folio = ext4_release_folio,
--
fs/ext4/inode.c=3956=static const struct address_space_operations ext4_da_aops = {
--
fs/ext4/inode.c-3963- .bmap = ext4_bmap,
fs/ext4/inode.c:3964: .invalidate_folio = ext4_invalidate_folio,
fs/ext4/inode.c-3965- .release_folio = ext4_release_folio,
--
fs/ext4/inode.c=5639=int ext4_write_inode(struct inode *inode, struct writeback_control *wbc)
--
fs/ext4/inode.c-5689-/*
fs/ext4/inode.c:5690: * In data=journal mode ext4_journalled_invalidate_folio() may fail to invalidate
fs/ext4/inode.c-5691- * buffers that are attached to a folio straddling i_size and are undergoing
--
fs/ext4/inode.c=5694=static void ext4_wait_for_tail_page_commit(struct inode *inode)
--
fs/ext4/inode.c-5704- * If the folio is fully truncated, we don't need to wait for any commit
fs/ext4/inode.c:5705: * (and we even should not as __ext4_journalled_invalidate_folio() may
fs/ext4/inode.c-5706- * strip all buffers from the folio but keep the folio dirty which can then
--
fs/ext4/inode.c-5718- return;
fs/ext4/inode.c:5719: ret = __ext4_journalled_invalidate_folio(folio, offset,
fs/ext4/inode.c-5720- folio_size(folio) - offset);
--
fs/f2fs/checkpoint.c=701=const struct address_space_operations f2fs_meta_aops = {
--
fs/f2fs/checkpoint.c-703- .dirty_folio = f2fs_dirty_meta_folio,
fs/f2fs/checkpoint.c:704: .invalidate_folio = f2fs_invalidate_folio,
fs/f2fs/checkpoint.c-705- .release_folio = f2fs_release_folio,
--
fs/f2fs/compress.c=1909=const struct address_space_operations f2fs_compress_aops = {
fs/f2fs/compress.c-1910- .release_folio = f2fs_release_folio,
fs/f2fs/compress.c:1911: .invalidate_folio = f2fs_invalidate_folio,
fs/f2fs/compress.c-1912- .migrate_folio = filemap_migrate_folio,
--
fs/f2fs/data.c=3986=static int f2fs_write_end(const struct kiocb *iocb,
--
fs/f2fs/data.c-4040-
fs/f2fs/data.c:4041:void f2fs_invalidate_folio(struct folio *folio, size_t offset, size_t length)
fs/f2fs/data.c-4042-{
--
fs/f2fs/data.c=4405=const struct address_space_operations f2fs_dblock_aops = {
--
fs/f2fs/data.c-4412- .migrate_folio = filemap_migrate_folio,
fs/f2fs/data.c:4413: .invalidate_folio = f2fs_invalidate_folio,
fs/f2fs/data.c-4414- .release_folio = f2fs_release_folio,
--
fs/f2fs/f2fs.h=4184=void f2fs_write_failed(struct inode *inode, loff_t to);
fs/f2fs/f2fs.h:4185:void f2fs_invalidate_folio(struct folio *folio, size_t offset, size_t length);
fs/f2fs/f2fs.h-4186-bool f2fs_release_folio(struct folio *folio, gfp_t wait);
--
fs/f2fs/node.c=2308=const struct address_space_operations f2fs_node_aops = {
--
fs/f2fs/node.c-2310- .dirty_folio = f2fs_dirty_node_folio,
fs/f2fs/node.c:2311: .invalidate_folio = f2fs_invalidate_folio,
fs/f2fs/node.c-2312- .release_folio = f2fs_release_folio,
--
fs/fat/inode.c=340=static const struct address_space_operations fat_aops = {
fs/fat/inode.c-341- .dirty_folio = block_dirty_folio,
fs/fat/inode.c:342: .invalidate_folio = block_invalidate_folio,
fs/fat/inode.c-343- .read_folio = fat_read_folio,
--
fs/fuse/file.c=3183=static const struct address_space_operations fuse_file_aops = {
--
fs/fuse/file.c-3189- .release_folio = iomap_release_folio,
fs/fuse/file.c:3190: .invalidate_folio = iomap_invalidate_folio,
fs/fuse/file.c-3191- .is_partially_uptodate = iomap_is_partially_uptodate,
--
fs/gfs2/aops.c=580=static void gfs2_discard(struct gfs2_sbd *sdp, struct buffer_head *bh)
--
fs/gfs2/aops.c-604-
fs/gfs2/aops.c:605:static void gfs2_invalidate_folio(struct folio *folio, size_t offset,
fs/gfs2/aops.c-606- size_t length)
--
fs/gfs2/aops.c=710=static const struct address_space_operations gfs2_aops = {
--
fs/gfs2/aops.c-715- .release_folio = iomap_release_folio,
fs/gfs2/aops.c:716: .invalidate_folio = iomap_invalidate_folio,
fs/gfs2/aops.c-717- .bmap = gfs2_bmap,
--
fs/gfs2/aops.c=723=static const struct address_space_operations gfs2_jdata_aops = {
--
fs/gfs2/aops.c-729- .migrate_folio = buffer_migrate_folio,
fs/gfs2/aops.c:730: .invalidate_folio = gfs2_invalidate_folio,
fs/gfs2/aops.c-731- .release_folio = gfs2_release_folio,
--
fs/gfs2/meta_io.c=101=const struct address_space_operations gfs2_meta_aops = {
fs/gfs2/meta_io.c-102- .dirty_folio = block_dirty_folio,
fs/gfs2/meta_io.c:103: .invalidate_folio = block_invalidate_folio,
fs/gfs2/meta_io.c-104- .writepages = gfs2_aspace_writepages,
--
fs/gfs2/meta_io.c=109=const struct address_space_operations gfs2_rgrp_aops = {
fs/gfs2/meta_io.c-110- .dirty_folio = block_dirty_folio,
fs/gfs2/meta_io.c:111: .invalidate_folio = block_invalidate_folio,
fs/gfs2/meta_io.c-112- .writepages = gfs2_aspace_writepages,
--
fs/hfs/inode.c=156=const struct address_space_operations hfs_btree_aops = {
fs/hfs/inode.c-157- .dirty_folio = block_dirty_folio,
fs/hfs/inode.c:158: .invalidate_folio = block_invalidate_folio,
fs/hfs/inode.c-159- .read_folio = hfs_read_folio,
--
fs/hfs/inode.c=168=const struct address_space_operations hfs_aops = {
fs/hfs/inode.c-169- .dirty_folio = block_dirty_folio,
fs/hfs/inode.c:170: .invalidate_folio = block_invalidate_folio,
fs/hfs/inode.c-171- .read_folio = hfs_read_folio,
--
fs/hfsplus/inode.c=154=const struct address_space_operations hfsplus_btree_aops = {
fs/hfsplus/inode.c-155- .dirty_folio = block_dirty_folio,
fs/hfsplus/inode.c:156: .invalidate_folio = block_invalidate_folio,
fs/hfsplus/inode.c-157- .read_folio = hfsplus_read_folio,
--
fs/hfsplus/inode.c=166=const struct address_space_operations hfsplus_aops = {
fs/hfsplus/inode.c-167- .dirty_folio = block_dirty_folio,
fs/hfsplus/inode.c:168: .invalidate_folio = block_invalidate_folio,
fs/hfsplus/inode.c-169- .read_folio = hfsplus_read_folio,
--
fs/hpfs/file.c=245=const struct address_space_operations hpfs_aops = {
fs/hpfs/file.c-246- .dirty_folio = block_dirty_folio,
fs/hpfs/file.c:247: .invalidate_folio = block_invalidate_folio,
fs/hpfs/file.c-248- .read_folio = hpfs_read_folio,
--
fs/iomap/buffered-io.c=740=EXPORT_SYMBOL_GPL(iomap_release_folio);
fs/iomap/buffered-io.c-741-
fs/iomap/buffered-io.c:742:void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len)
fs/iomap/buffered-io.c-743-{
fs/iomap/buffered-io.c:744: trace_iomap_invalidate_folio(folio->mapping->host,
fs/iomap/buffered-io.c-745- folio_pos(folio) + offset, len);
--
fs/iomap/buffered-io.c-756-}
fs/iomap/buffered-io.c:757:EXPORT_SYMBOL_GPL(iomap_invalidate_folio);
fs/iomap/buffered-io.c-758-
--
fs/iomap/trace.h=83=DEFINE_RANGE_EVENT(iomap_release_folio);
fs/iomap/trace.h:84:DEFINE_RANGE_EVENT(iomap_invalidate_folio);
fs/iomap/trace.h-85-DEFINE_RANGE_EVENT(iomap_dio_invalidate_fail);
--
fs/jbd2/journal.c=86=EXPORT_SYMBOL(jbd2_journal_blocks_per_folio);
fs/jbd2/journal.c:87:EXPORT_SYMBOL(jbd2_journal_invalidate_folio);
fs/jbd2/journal.c-88-EXPORT_SYMBOL(jbd2_journal_try_to_free_buffers);
--
fs/jbd2/transaction.c=2181=static int __dispose_buffer(struct journal_head *jh, transaction_t *transaction)
--
fs/jbd2/transaction.c-2205-/*
fs/jbd2/transaction.c:2206: * jbd2_journal_invalidate_folio
fs/jbd2/transaction.c-2207- *
--
fs/jbd2/transaction.c-2211- *
fs/jbd2/transaction.c:2212: * i_size must be updated on disk before we start calling invalidate_folio
fs/jbd2/transaction.c-2213- * on the data.
--
fs/jbd2/transaction.c=2252=static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh,
--
fs/jbd2/transaction.c-2417-/**
fs/jbd2/transaction.c:2418: * jbd2_journal_invalidate_folio()
fs/jbd2/transaction.c-2419- * @journal: journal to use for flush...
--
fs/jbd2/transaction.c-2428- */
fs/jbd2/transaction.c:2429:int jbd2_journal_invalidate_folio(journal_t *journal, struct folio *folio,
fs/jbd2/transaction.c-2430- size_t offset, size_t length)
--
fs/jfs/inode.c=356=const struct address_space_operations jfs_aops = {
fs/jfs/inode.c-357- .dirty_folio = block_dirty_folio,
fs/jfs/inode.c:358: .invalidate_folio = block_invalidate_folio,
fs/jfs/inode.c-359- .read_folio = jfs_read_folio,
--
fs/jfs/jfs_metapage.c=645=static int metapage_migrate_folio(struct address_space *mapping,
--
fs/jfs/jfs_metapage.c-663-
fs/jfs/jfs_metapage.c:664:static void metapage_invalidate_folio(struct folio *folio, size_t offset,
fs/jfs/jfs_metapage.c-665- size_t length)
--
fs/jfs/jfs_metapage.c=674=const struct address_space_operations jfs_metapage_aops = {
--
fs/jfs/jfs_metapage.c-677- .release_folio = metapage_release_folio,
fs/jfs/jfs_metapage.c:678: .invalidate_folio = metapage_invalidate_folio,
fs/jfs/jfs_metapage.c-679- .dirty_folio = filemap_dirty_folio,
--
fs/minix/inode.c=488=static const struct address_space_operations minix_aops = {
fs/minix/inode.c-489- .dirty_folio = block_dirty_folio,
fs/minix/inode.c:490: .invalidate_folio = block_invalidate_folio,
fs/minix/inode.c-491- .read_folio = minix_read_folio,
--
fs/netfs/misc.c=200=EXPORT_SYMBOL(netfs_clear_inode_writeback);
--
fs/netfs/misc.c-202-/**
fs/netfs/misc.c:203: * netfs_invalidate_folio - Invalidate or partially invalidate a folio
fs/netfs/misc.c-204- * @folio: Folio proposed for release
--
fs/netfs/misc.c-210- */
fs/netfs/misc.c:211:void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length)
fs/netfs/misc.c-212-{
--
fs/netfs/misc.c-282-}
fs/netfs/misc.c:283:EXPORT_SYMBOL(netfs_invalidate_folio);
fs/netfs/misc.c-284-
--
fs/nfs/file.c=424=static int nfs_write_end(const struct kiocb *iocb,
--
fs/nfs/file.c-482- */
fs/nfs/file.c:483:static void nfs_invalidate_folio(struct folio *folio, size_t offset,
fs/nfs/file.c-484- size_t length)
--
fs/nfs/file.c-486- struct inode *inode = folio->mapping->host;
fs/nfs/file.c:487: dfprintk(PAGECACHE, "NFS: invalidate_folio(%lu, %zu, %zu)\n",
fs/nfs/file.c-488- folio->index, offset, length);
--
fs/nfs/file.c-495- folio_wait_private_2(folio); /* [DEPRECATED] */
fs/nfs/file.c:496: trace_nfs_invalidate_folio(inode, folio_pos(folio) + offset, length);
fs/nfs/file.c-497-}
--
fs/nfs/file.c=618=const struct address_space_operations nfs_file_aops = {
--
fs/nfs/file.c-624- .write_end = nfs_write_end,
fs/nfs/file.c:625: .invalidate_folio = nfs_invalidate_folio,
fs/nfs/file.c-626- .release_folio = nfs_release_folio,
--
fs/nfs/nfstrace.h=1069=DEFINE_NFS_FOLIO_EVENT_DONE(nfs_writeback_folio_done);
fs/nfs/nfstrace.h-1070-
fs/nfs/nfstrace.h:1071:DEFINE_NFS_FOLIO_EVENT(nfs_invalidate_folio);
fs/nfs/nfstrace.h-1072-DEFINE_NFS_FOLIO_EVENT_DONE(nfs_launder_folio_done);
--
fs/nilfs2/inode.c=272=const struct address_space_operations nilfs_aops = {
]
] |
| 23/2 | 2026/03/16 15:16 | llm | debugger |
0mModel:gemini-3.1-pro-preview Tokens: input: 25196 output: 49 thoughts: 32 Thoughts: Okay, here's the summary, expressed as though I'm the one having the thought: **Finding `metapage_invalidate_folio`** Right, let me see... I'm looking for the function `metapage_invalidate_folio`. Got it. Looks like it's defined within the JFS filesystem code. Specifically, I can find it in `fs/jfs/jfs_metapage.c`, and it's located at line 664. That's good to know; at least I have a starting point to dive deeper and analyze what the function does in its specific context. |
| 24/2 | 2026/03/16 15:16 | tool | codesearch-definition-source |
0mArgs:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:metapage_invalidate_folio] Results: map[SourceCode: 664: static void metapage_invalidate_folio(struct folio *folio, size_t offset,
665: size_t length)
666: {
667: BUG_ON(offset || length < folio_size(folio));
668:
669: BUG_ON(folio_test_writeback(folio));
670:
671: metapage_release_folio(folio, 0);
672: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 25/2 | 2026/03/16 15:16 | llm | debugger |
0mModel:gemini-3.1-pro-preview Tokens: input: 25578 output: 49 thoughts: 10 Thoughts: Okay, here's my summary, presented as a first-person account reflecting on the provided text, assuming the role of an expert in the field: **Reflecting on `metapage_release_folio` and its Implications** Alright, let's break this down. When I see mention of `metapage_release_folio`, my mind immediately goes to the architectural details and the specific implications within the system's design. This isn't just a function name; it's a critical point in the memory management lifecycle, undoubtedly tied to how we handle metadata and potential resource reclamation. My first thought is around concurrency and thread safety. Is this operation atomic? Does it involve locks? If it's not handled carefully, we're likely looking at data corruption or subtle race conditions that could manifest as intermittent performance hits or, worse, hard-to-debug crashes. I need to know how it interacts with other processes accessing shared data structures. Are we talking about a simple counter decrement, or is there a more complex deallocation cascade involved? Then I consider the *why* of the "release". What signals the need for this folio to be released? Is it triggered by explicit user action, system cleanup, or perhaps a resource threshold? Understanding the trigger is crucial. This helps determine how this impacts the design. It impacts things like performance, memory consumption, and responsiveness. The "meta" aspect is also key. Metadata, in this context, suggests information *about* data, not the raw data itself. Thus, I'm thinking about the performance impact of releasing the folio. Does this involve updating indexes, invalidating caches, or potentially, notifying other parts of the system about this change? How much overhead is there? If this is a frequent operation, performance needs to be a primary consideration. 
The "folio" part of this function suggests pages of data, but this could be a virtual space or an object representing metadata, so I must assess the data structures that come into play here. Furthermore, how does this relate to other memory management routines? Is it part of a larger system, perhaps related to a garbage collector, a reference counting system, or something more specialized to this architecture? I need to understand its position within the broader context of the memory model. Finally, I immediately start to evaluate the potential for errors. What are the preconditions for this function to execute correctly? What are the failure modes? Are there checks in place to prevent double-frees or leaks? I'm asking these questions because I've seen these types of operations go sideways in the past. Detailed logging and robust error handling are essential. Overall, `metapage_release_folio` is more than a simple function; it's a pivotal moment in the system's runtime behavior. It's a point I would want to dissect for performance, correctness, and security. A solid understanding of *this* is essential for building a robust system. |
| 26/2 | 2026/03/16 15:17 | tool | codesearch-definition-source |
0mArgs:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:metapage_release_folio] Results: map[SourceCode: 612: static bool metapage_release_folio(struct folio *folio, gfp_t gfp_mask)
613: {
614: struct metapage *mp;
615: bool ret = true;
616: int offset;
617:
618: for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
619: mp = folio_to_mp(folio, offset);
620:
621: if (!mp)
622: continue;
623:
624: jfs_info("metapage_release_folio: mp = 0x%p", mp);
625: if (mp->count || mp->nohomeok ||
626: test_bit(META_dirty, &mp->flag)) {
627: jfs_info("count = %ld, nohomeok = %d", mp->count,
628: mp->nohomeok);
629: ret = false;
630: continue;
631: }
632: if (mp->lsn)
633: remove_from_logsync(mp);
634: remove_metapage(folio, mp);
635: INCREMENT(mpStat.pagefree);
636: free_metapage(mp);
637: }
638: return ret;
639: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 27/2 | 2026/03/16 15:17 | llm | debugger |
2mModel:gemini-3.1-pro-preview Tokens: input: 26685 output: 555 thoughts: 676 Thoughts: **Tracing a Metapage Release and Writeback Bug** Okay, let's break this down. My initial thought is that if `metapage_release_folio` returns false, it means the folio isn't released. That's a key observation. Then, I remember `metapage_invalidate_folio` calls `metapage_release_folio` with a zero flag. I have to see how this works, because, if `metapage_release_folio` returns false, it just exits, so how is it that `metapage_invalidate_folio` returns `void`? That seems important. It looks like `invalidate_folio` *doesn't* prevent the folio from being removed from the page cache. Aha, but if it *doesn't* free the private data, the folio is still removed if it's being truncated! Interesting. Let me consider how `truncate_cleanup_folio` and `filemap_free_folio` fit into this. Actually, `invalidate_folio` *is* called, and then the folio *is* removed from the page cache. Got it. So, if `invalidate_folio` is called, the folio comes out of the cache and, critically, its `mapping` pointer gets set to `NULL`. But wait… the `metapage` itself is *still* attached to the folio. And, the `metapage` is still in the `tlock` list because either `mp->nohomeok` or `mp->count` was non-zero, or it was dirty. Okay, I follow the logic here. Later, `txUnlock` is called. This triggers `put_metapage(mp)`. `put_metapage` calls `release_metapage(mp)`. Now, `release_metapage` sees `test_bit(META_dirty, &mp->flag)` and `test_bit(META_sync, &mp->flag)` are both true. So, `metapage_write_one(folio)` is called. This leads to `metapage_write_folio(folio, &wbc)`. The crucial point… `metapage_write_folio` then dereferences `folio->mapping->host`. And *that* is NULL because the folio was truncated. Bingo! I think I've found the root cause. This confirms the crash I'm seeing. Let's double-check by confirming `folio->mapping` *is* indeed NULL at that point. 
The crash backtrace shows the relevant line in `metapage_write_folio`. I'll cross-reference the assembly with the C code to be sure. It looks like my initial hypothesis is correct, and I can now confirm it with the disassembly. |
| 28/2 | 2026/03/16 15:19 | tool | codeexpert |
2mArgs:map[Question:In fs/jfs/jfs_metapage.c:428, `struct inode *inode = folio->mapping->host;` is executed. The crash happens at `cmpb $0x0,(%rax,%r15,1)` where `%rax` is 0, which means it's checking KASAN shadow for address 0. The assembly is: 5: 4d 8d 74 24 18 lea 0x18(%r12),%r14 a: 4c 89 f0 mov %r14,%rax d: 48 c1 e8 03 shr $0x3,%rax 11: 42 80 3c 38 00 cmpb $0x0,(%rax,%r15,1) 16: 74 08 je 0x20 18: 4c 89 f7 mov %r14,%rdi 1b: e8 82 91 dc fe call 0xfedc91a2 20: 4d 8b 36 mov (%r14),%r14 23: 4c 89 f0 mov %r14,%rax 26: 48 c1 e8 03 shr $0x3,%rax * 2a: 42 80 3c 38 00 cmpb $0x0,(%rax,%r15,1) <-- trapping instruction 2f: 74 08 je 0x39 31: 4c 89 f7 mov %r14,%rdi 34: e8 69 91 dc fe call 0xfedc91a2 39: 49 8b 1e mov (%r14),%rbx Can you confirm if `%r12` is `folio` and `0x18(%r12)` is `folio->mapping`? And the crash is dereferencing `folio->mapping` which is NULL?] Results: map[Answer:Based on the analysis of the source code and the provided assembly, I can confirm your suspicions.
### 1. Register `%r12` and Offset `0x18`
In the Linux kernel's `struct folio` (and the underlying `struct page`), the `mapping` field is indeed located at offset **24 (0x18)** on 64-bit systems.
The structure layout is as follows:
* **Offset 0 (0x00):** `unsigned long flags` (8 bytes)
* **Offset 8 (0x08):** `union { struct list_head lru; ... }` (16 bytes)
* **Offset 24 (0x18):** `struct address_space *mapping` (8 bytes)
In the assembly snippet:
```assembly
5: 4d 8d 74 24 18 lea 0x18(%r12),%r14
```
`%r14` is loaded with the address of `folio->mapping` (i.e., `&folio->mapping`). This confirms that **`%r12` is the `folio` pointer**.
### 2. The Crash and NULL `folio->mapping`
The assembly shows a standard KASAN (Kernel Address Sanitizer) instrumentation sequence for a pointer dereference:
1. **Check `&folio->mapping`:**
```assembly
a: 4c 89 f0 mov %r14,%rax
d: 48 c1 e8 03 shr $0x3,%rax
11: 42 80 3c 38 00 cmpb $0x0,(%rax,%r15,1)
```
This checks if the memory containing the `mapping` pointer itself is accessible.
2. **Load `folio->mapping`:**
```assembly
20: 4d 8b 36 mov (%r14),%r14
```
`%r14` now holds the value of `folio->mapping`.
3. **Check `folio->mapping` (The Trapping Instruction):**
```assembly
23: 4c 89 f0 mov %r14,%rax
26: 48 c1 e8 03 shr $0x3,%rax
2a: 42 80 3c 38 00 cmpb $0x0,(%rax,%r15,1) <-- trapping instruction
```
This is the KASAN check for the address stored in `%r14` (the `mapping` pointer) before it is dereferenced. You noted that `%rax` is 0 here. Since `%rax` is derived from `%r14 >> 3`, this means **`%r14` (the `mapping` pointer) is NULL (0)**.
4. **Dereference `folio->mapping->host`:**
The instruction at offset `39` would have been the actual dereference:
```assembly
39: 49 8b 1e mov (%r14),%rbx
```
Since `host` is the first member of `struct address_space` (offset 0), `folio->mapping->host` is equivalent to `*folio->mapping`.
### Conclusion
The crash is indeed caused by **`folio->mapping` being NULL**. KASAN detects the attempt to dereference a NULL pointer at line 428 (in `metapage_write_folio`) and traps at the shadow memory check.
In JFS, this can happen if a metapage is being released or forced while its underlying folio has already been truncated or removed from the page cache (which sets `folio->mapping` to NULL). Specifically, if `release_metapage` is called on a dirty metapage after the folio has been invalidated, it attempts to write the page via `metapage_write_one` -> `metapage_write_folio`, leading to this NULL dereference.] |
| 29/3 | 2026/03/16 15:19 | agent | codeexpert |
2mModel:gemini-3-flash-preview Instruction: You are a capable Linux kernel developer tasked with researching complex questions about kernel source code. You will be given a concrete question, and need to provide a concrete answer. Use tools extensively while researching the question. Don't make assumptions, or rely on your previous knowledge about the kernel source code, use available tools to access the actual source code. Use all available sources of information: - kernel source code - documentation in the Documentation dir in the source tree - git commits descriptions, git blame Prefer calling several tools at the same time to save round-trips. Prompt: In fs/jfs/jfs_metapage.c:428, `struct inode *inode = folio->mapping->host;` is executed. The crash happens at `cmpb $0x0,(%rax,%r15,1)` where `%rax` is 0, which means it's checking KASAN shadow for address 0. The assembly is: 5: 4d 8d 74 24 18 lea 0x18(%r12),%r14 a: 4c 89 f0 mov %r14,%rax d: 48 c1 e8 03 shr $0x3,%rax 11: 42 80 3c 38 00 cmpb $0x0,(%rax,%r15,1) 16: 74 08 je 0x20 18: 4c 89 f7 mov %r14,%rdi 1b: e8 82 91 dc fe call 0xfedc91a2 20: 4d 8b 36 mov (%r14),%r14 23: 4c 89 f0 mov %r14,%rax 26: 48 c1 e8 03 shr $0x3,%rax * 2a: 42 80 3c 38 00 cmpb $0x0,(%rax,%r15,1) <-- trapping instruction 2f: 74 08 je 0x39 31: 4c 89 f7 mov %r14,%rdi 34: e8 69 91 dc fe call 0xfedc91a2 39: 49 8b 1e mov (%r14),%rbx Can you confirm if `%r12` is `folio` and `0x18(%r12)` is `folio->mapping`? And the crash is dereferencing `folio->mapping` which is NULL? Reply: Based on the analysis of the source code and the provided assembly, I can confirm your suspicions.
|
| 30/4 | 2026/03/16 15:19 | llm | codeexpert |
0mModel:gemini-3-flash-preview Tokens: input: 2793 output: 28 thoughts: 99 Thoughts: **Analyzing a Potential Kernel Crash** Okay, so the user is reporting a crash specifically at `fs/jfs/jfs_metapage.c:428`, and they're suspecting the dreaded `folio->mapping` being `NULL`. Classic. The line in question, `struct inode *inode = folio->mapping->host;`, is a pretty standard access pattern, and the KASAN check in the assembly suggests that there's at least *some* attempt to validate things before the dereference, which is good but not foolproof in every situation. Given that this is a reported crash, my immediate thought is to confirm or deny the `folio->mapping` being null hypothesis. This is often the prime culprit in such scenarios. I need to dig into the surrounding code and see what possible scenarios could lead `folio->mapping` to be uninitialized or otherwise invalid at this point. I'll need to scrutinize the `jfs_metapage.c` file and how folios are handled in the JFS filesystem. The assembly gives me a point of reference if needed, to confirm the condition of the variables at runtime. My next step will be to explore the code to see what scenarios might be at play that could lead to the crash. |
| 31/4 | 2026/03/16 15:19 | tool | read-file |
0mArgs:map[File:fs/jfs/jfs_metapage.c] Results: map[Contents:// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Copyright (C) International Business Machines Corp., 2000-2005
* Portions Copyright (C) Christoph Hellwig, 2001-2002
*/
#include <linux/blkdev.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/bio.h>
#include <linux/slab.h>
#include <linux/init.h>
#include <linux/buffer_head.h>
#include <linux/mempool.h>
#include <linux/seq_file.h>
#include <linux/writeback.h>
#include <linux/migrate.h>
#include "jfs_incore.h"
#include "jfs_superblock.h"
#include "jfs_filsys.h"
#include "jfs_metapage.h"
#include "jfs_txnmgr.h"
#include "jfs_debug.h"
#ifdef CONFIG_JFS_STATISTICS
static struct {
uint pagealloc; /* # of page allocations */
uint pagefree; /* # of page frees */
uint lockwait; /* # of sleeping lock_metapage() calls */
} mpStat;
#endif
#define metapage_locked(mp) test_bit(META_locked, &(mp)->flag)
#define trylock_metapage(mp) test_and_set_bit_lock(META_locked, &(mp)->flag)
static inline void unlock_metapage(struct metapage *mp)
{
clear_bit_unlock(META_locked, &mp->flag);
wake_up(&mp->wait);
}
static inline void __lock_metapage(struct metapage *mp)
{
DECLARE_WAITQUEUE(wait, current);
INCREMENT(mpStat.lockwait);
add_wait_queue_exclusive(&mp->wait, &wait);
do {
set_current_state(TASK_UNINTERRUPTIBLE);
if (metapage_locked(mp)) {
folio_unlock(mp->folio);
io_schedule();
folio_lock(mp->folio);
}
} while (trylock_metapage(mp));
__set_current_state(TASK_RUNNING);
remove_wait_queue(&mp->wait, &wait);
}
/*
* Must have mp->folio locked
*/
static inline void lock_metapage(struct metapage *mp)
{
if (trylock_metapage(mp))
__lock_metapage(mp);
}
#define METAPOOL_MIN_PAGES 32
static struct kmem_cache *metapage_cache;
static mempool_t *metapage_mempool;
#define MPS_PER_PAGE (PAGE_SIZE >> L2PSIZE)
#if MPS_PER_PAGE > 1
struct meta_anchor {
int mp_count;
atomic_t io_count;
blk_status_t status;
struct metapage *mp[MPS_PER_PAGE];
};
static inline struct metapage *folio_to_mp(struct folio *folio, int offset)
{
struct meta_anchor *anchor = folio->private;
if (!anchor)
return NULL;
return anchor->mp[offset >> L2PSIZE];
}
static inline int insert_metapage(struct folio *folio, struct metapage *mp)
{
struct meta_anchor *a;
int index;
int l2mp_blocks; /* log2 blocks per metapage */
a = folio->private;
if (!a) {
a = kzalloc_obj(struct meta_anchor, GFP_NOFS);
if (!a)
return -ENOMEM;
folio_attach_private(folio, a);
kmap(&folio->page);
}
if (mp) {
l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1);
a->mp_count++;
a->mp[index] = mp;
}
return 0;
}
static inline void remove_metapage(struct folio *folio, struct metapage *mp)
{
struct meta_anchor *a = folio->private;
int l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
int index;
index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1);
BUG_ON(a->mp[index] != mp);
a->mp[index] = NULL;
if (--a->mp_count == 0) {
kfree(a);
folio_detach_private(folio);
kunmap(&folio->page);
}
}
static inline void inc_io(struct folio *folio)
{
struct meta_anchor *anchor = folio->private;
atomic_inc(&anchor->io_count);
}
static inline void dec_io(struct folio *folio, blk_status_t status,
void (*handler)(struct folio *, blk_status_t))
{
struct meta_anchor *anchor = folio->private;
if (anchor->status == BLK_STS_OK)
anchor->status = status;
if (atomic_dec_and_test(&anchor->io_count))
handler(folio, anchor->status);
}
#ifdef CONFIG_MIGRATION
static int __metapage_migrate_folio(struct address_space *mapping,
struct folio *dst, struct folio *src,
enum migrate_mode mode)
{
struct meta_anchor *src_anchor = src->private;
struct metapage *mps[MPS_PER_PAGE] = {0};
struct metapage *mp;
int i, rc;
for (i = 0; i < MPS_PER_PAGE; i++) {
mp = src_anchor->mp[i];
if (mp && metapage_locked(mp))
return -EAGAIN;
}
rc = filemap_migrate_folio(mapping, dst, src, mode);
if (rc)
return rc;
for (i = 0; i < MPS_PER_PAGE; i++) {
mp = src_anchor->mp[i];
if (!mp)
continue;
if (unlikely(insert_metapage(dst, mp))) {
/* If error, roll-back previously inserted pages */
for (int j = 0 ; j < i; j++) {
if (mps[j])
remove_metapage(dst, mps[j]);
}
return -EAGAIN;
}
mps[i] = mp;
}
/* Update the metapage and remove it from src */
for (i = 0; i < MPS_PER_PAGE; i++) {
mp = mps[i];
if (mp) {
int page_offset = mp->data - folio_address(src);
mp->data = folio_address(dst) + page_offset;
mp->folio = dst;
remove_metapage(src, mp);
}
}
return 0;
}
#endif /* CONFIG_MIGRATION */
#else
static inline struct metapage *folio_to_mp(struct folio *folio, int offset)
{
return folio->private;
}
static inline int insert_metapage(struct folio *folio, struct metapage *mp)
{
if (mp) {
folio_attach_private(folio, mp);
kmap(&folio->page);
}
return 0;
}
static inline void remove_metapage(struct folio *folio, struct metapage *mp)
{
folio_detach_private(folio);
kunmap(&folio->page);
}
#define inc_io(folio) do {} while(0)
#define dec_io(folio, status, handler) handler(folio, status)
#ifdef CONFIG_MIGRATION
static int __metapage_migrate_folio(struct address_space *mapping,
struct folio *dst, struct folio *src,
enum migrate_mode mode)
{
struct metapage *mp;
int page_offset;
int rc;
mp = folio_to_mp(src, 0);
if (metapage_locked(mp))
return -EAGAIN;
rc = filemap_migrate_folio(mapping, dst, src, mode);
if (rc)
return rc;
if (unlikely(insert_metapage(dst, mp)))
return -EAGAIN;
page_offset = mp->data - folio_address(src);
mp->data = folio_address(dst) + page_offset;
mp->folio = dst;
remove_metapage(src, mp);
return 0;
}
#endif /* CONFIG_MIGRATION */
#endif
static inline struct metapage *alloc_metapage(gfp_t gfp_mask)
{
struct metapage *mp = mempool_alloc(metapage_mempool, gfp_mask);
if (mp) {
mp->lid = 0;
mp->lsn = 0;
mp->data = NULL;
mp->clsn = 0;
mp->log = NULL;
init_waitqueue_head(&mp->wait);
}
return mp;
}
static inline void free_metapage(struct metapage *mp)
{
mempool_free(mp, metapage_mempool);
}
int __init metapage_init(void)
{
/*
* Allocate the metapage structures
*/
metapage_cache = kmem_cache_create("jfs_mp", sizeof(struct metapage),
0, 0, NULL);
if (metapage_cache == NULL)
return -ENOMEM;
metapage_mempool = mempool_create_slab_pool(METAPOOL_MIN_PAGES,
metapage_cache);
if (metapage_mempool == NULL) {
kmem_cache_destroy(metapage_cache);
return -ENOMEM;
}
return 0;
}
void metapage_exit(void)
{
mempool_destroy(metapage_mempool);
kmem_cache_destroy(metapage_cache);
}
static inline void drop_metapage(struct folio *folio, struct metapage *mp)
{
if (mp->count || mp->nohomeok || test_bit(META_dirty, &mp->flag) ||
test_bit(META_io, &mp->flag))
return;
remove_metapage(folio, mp);
INCREMENT(mpStat.pagefree);
free_metapage(mp);
}
/*
* Metapage address space operations
*/
static sector_t metapage_get_blocks(struct inode *inode, sector_t lblock,
int *len)
{
int rc = 0;
int xflag;
s64 xaddr;
sector_t file_blocks = (inode->i_size + inode->i_sb->s_blocksize - 1) >>
inode->i_blkbits;
if (lblock >= file_blocks)
return 0;
if (lblock + *len > file_blocks)
*len = file_blocks - lblock;
if (inode->i_ino) {
rc = xtLookup(inode, (s64)lblock, *len, &xflag, &xaddr, len, 0);
if ((rc == 0) && *len)
lblock = (sector_t)xaddr;
else
lblock = 0;
} /* else no mapping */
return lblock;
}
static void last_read_complete(struct folio *folio, blk_status_t status)
{
if (status)
printk(KERN_ERR "Read error %d at %#llx\n", status,
folio_pos(folio));
folio_end_read(folio, status == 0);
}
static void metapage_read_end_io(struct bio *bio)
{
struct folio *folio = bio->bi_private;
dec_io(folio, bio->bi_status, last_read_complete);
bio_put(bio);
}
static void remove_from_logsync(struct metapage *mp)
{
struct jfs_log *log = mp->log;
unsigned long flags;
/*
* This can race. Recheck that log hasn't been set to null, and after
* acquiring logsync lock, recheck lsn
*/
if (!log)
return;
LOGSYNC_LOCK(log, flags);
if (mp->lsn) {
mp->log = NULL;
mp->lsn = 0;
mp->clsn = 0;
log->count--;
list_del(&mp->synclist);
}
LOGSYNC_UNLOCK(log, flags);
}
static void last_write_complete(struct folio *folio, blk_status_t status)
{
struct metapage *mp;
unsigned int offset;
if (status) {
int err = blk_status_to_errno(status);
printk(KERN_ERR "metapage_write_end_io: I/O error\n");
mapping_set_error(folio->mapping, err);
}
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (mp && test_bit(META_io, &mp->flag)) {
if (mp->lsn)
remove_from_logsync(mp);
clear_bit(META_io, &mp->flag);
}
/*
* I'd like to call drop_metapage here, but I don't think it's
* safe unless I have the page locked
*/
}
folio_end_writeback(folio);
}
static void metapage_write_end_io(struct bio *bio)
{
struct folio *folio = bio->bi_private;
BUG_ON(!folio->private);
dec_io(folio, bio->bi_status, last_write_complete);
bio_put(bio);
}
static int metapage_write_folio(struct folio *folio,
struct writeback_control *wbc)
{
struct bio *bio = NULL;
int block_offset; /* block offset of mp within page */
struct inode *inode = folio->mapping->host;
int blocks_per_mp = JFS_SBI(inode->i_sb)->nbperpage;
int len;
int xlen;
struct metapage *mp;
int redirty = 0;
sector_t lblock;
int nr_underway = 0;
sector_t pblock;
sector_t next_block = 0;
sector_t page_start;
unsigned long bio_bytes = 0;
unsigned long bio_offset = 0;
int offset;
int bad_blocks = 0;
page_start = folio_pos(folio) >> inode->i_blkbits;
BUG_ON(!folio_test_locked(folio));
BUG_ON(folio_test_writeback(folio));
folio_start_writeback(folio);
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (!mp || !test_bit(META_dirty, &mp->flag))
continue;
if (mp->nohomeok && !test_bit(META_forcewrite, &mp->flag)) {
redirty = 1;
/*
* Make sure this page isn't blocked indefinitely.
* If the journal isn't undergoing I/O, push it
*/
if (mp->log && !(mp->log->cflag & logGC_PAGEOUT))
jfs_flush_journal(mp->log, 0);
continue;
}
clear_bit(META_dirty, &mp->flag);
set_bit(META_io, &mp->flag);
block_offset = offset >> inode->i_blkbits;
lblock = page_start + block_offset;
if (bio) {
if (xlen && lblock == next_block) {
/* Contiguous, in memory & on disk */
len = min(xlen, blocks_per_mp);
xlen -= len;
bio_bytes += len << inode->i_blkbits;
continue;
}
/* Not contiguous */
bio_add_folio_nofail(bio, folio, bio_bytes, bio_offset);
/*
* Increment counter before submitting i/o to keep
* count from hitting zero before we're through
*/
inc_io(folio);
if (!bio->bi_iter.bi_size)
goto dump_bio;
submit_bio(bio);
nr_underway++;
bio = NULL;
} else
inc_io(folio);
xlen = (folio_size(folio) - offset) >> inode->i_blkbits;
pblock = metapage_get_blocks(inode, lblock, &xlen);
if (!pblock) {
printk(KERN_ERR "JFS: metapage_get_blocks failed\n");
/*
* We already called inc_io(), but can't cancel it
* with dec_io() until we're done with the page
*/
bad_blocks++;
continue;
}
len = min(xlen, (int)JFS_SBI(inode->i_sb)->nbperpage);
bio = bio_alloc(inode->i_sb->s_bdev, 1, REQ_OP_WRITE, GFP_NOFS);
bio->bi_iter.bi_sector = pblock << (inode->i_blkbits - 9);
bio->bi_end_io = metapage_write_end_io;
bio->bi_private = folio;
/* Don't call bio_add_page yet, we may add to this vec */
bio_offset = offset;
bio_bytes = len << inode->i_blkbits;
xlen -= len;
next_block = lblock + len;
}
if (bio) {
bio_add_folio_nofail(bio, folio, bio_bytes, bio_offset);
if (!bio->bi_iter.bi_size)
goto dump_bio;
submit_bio(bio);
nr_underway++;
}
if (redirty)
folio_redirty_for_writepage(wbc, folio);
folio_unlock(folio);
if (bad_blocks)
goto err_out;
if (nr_underway == 0)
folio_end_writeback(folio);
return 0;
dump_bio:
print_hex_dump(KERN_ERR, "JFS: dump of bio: ", DUMP_PREFIX_ADDRESS, 16,
4, bio, sizeof(*bio), 0);
bio_put(bio);
folio_unlock(folio);
dec_io(folio, BLK_STS_OK, last_write_complete);
err_out:
while (bad_blocks--)
dec_io(folio, BLK_STS_OK, last_write_complete);
return -EIO;
}
static int metapage_writepages(struct address_space *mapping,
struct writeback_control *wbc)
{
struct blk_plug plug;
struct folio *folio = NULL;
int err;
blk_start_plug(&plug);
while ((folio = writeback_iter(mapping, wbc, folio, &err)))
err = metapage_write_folio(folio, wbc);
blk_finish_plug(&plug);
return err;
}
static int metapage_read_folio(struct file *fp, struct folio *folio)
{
struct inode *inode = folio->mapping->host;
struct bio *bio = NULL;
int block_offset;
int blocks_per_page = i_blocks_per_folio(inode, folio);
sector_t page_start; /* address of page in fs blocks */
sector_t pblock;
int xlen;
unsigned int len;
int offset;
BUG_ON(!folio_test_locked(folio));
page_start = folio_pos(folio) >> inode->i_blkbits;
block_offset = 0;
while (block_offset < blocks_per_page) {
xlen = blocks_per_page - block_offset;
pblock = metapage_get_blocks(inode, page_start + block_offset,
&xlen);
if (pblock) {
if (!folio->private)
insert_metapage(folio, NULL);
inc_io(folio);
if (bio)
submit_bio(bio);
bio = bio_alloc(inode->i_sb->s_bdev, 1, REQ_OP_READ,
GFP_NOFS);
bio->bi_iter.bi_sector =
pblock << (inode->i_blkbits - 9);
bio->bi_end_io = metapage_read_end_io;
bio->bi_private = folio;
len = xlen << inode->i_blkbits;
offset = block_offset << inode->i_blkbits;
bio_add_folio_nofail(bio, folio, len, offset);
block_offset += xlen;
} else
block_offset++;
}
if (bio)
submit_bio(bio);
else
folio_unlock(folio);
return 0;
}
static bool metapage_release_folio(struct folio *folio, gfp_t gfp_mask)
{
struct metapage *mp;
bool ret = true;
int offset;
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (!mp)
continue;
jfs_info("metapage_release_folio: mp = 0x%p", mp);
if (mp->count || mp->nohomeok ||
test_bit(META_dirty, &mp->flag)) {
jfs_info("count = %ld, nohomeok = %d", mp->count,
mp->nohomeok);
ret = false;
continue;
}
if (mp->lsn)
remove_from_logsync(mp);
remove_metapage(folio, mp);
INCREMENT(mpStat.pagefree);
free_metapage(mp);
}
return ret;
}
#ifdef CONFIG_MIGRATION
/*
* metapage_migrate_folio - Migration function for JFS metapages
*/
static int metapage_migrate_folio(struct address_space *mapping,
struct folio *dst, struct folio *src,
enum migrate_mode mode)
{
int expected_count;
if (!src->private)
return filemap_migrate_folio(mapping, dst, src, mode);
/* Check whether page does not have extra refs before we do more work */
expected_count = folio_expected_ref_count(src) + 1;
if (folio_ref_count(src) != expected_count)
return -EAGAIN;
return __metapage_migrate_folio(mapping, dst, src, mode);
}
#else
#define metapage_migrate_folio NULL
#endif /* CONFIG_MIGRATION */
static void metapage_invalidate_folio(struct folio *folio, size_t offset,
size_t length)
{
BUG_ON(offset || length < folio_size(folio));
BUG_ON(folio_test_writeback(folio));
metapage_release_folio(folio, 0);
}
const struct address_space_operations jfs_metapage_aops = {
.read_folio = metapage_read_folio,
.writepages = metapage_writepages,
.release_folio = metapage_release_folio,
.invalidate_folio = metapage_invalidate_folio,
.dirty_folio = filemap_dirty_folio,
.migrate_folio = metapage_migrate_folio,
};
struct metapage *__get_metapage(struct inode *inode, unsigned long lblock,
unsigned int size, int absolute,
unsigned long new)
{
int l2BlocksPerPage;
int l2bsize;
struct address_space *mapping;
struct metapage *mp = NULL;
struct folio *folio;
unsigned long page_index;
unsigned long page_offset;
jfs_info("__get_metapage: ino = %ld, lblock = 0x%lx, abs=%d",
inode->i_ino, lblock, absolute);
l2bsize = inode->i_blkbits;
l2BlocksPerPage = PAGE_SHIFT - l2bsize;
page_index = lblock >> l2BlocksPerPage;
page_offset = (lblock - (page_index << l2BlocksPerPage)) << l2bsize;
if ((page_offset + size) > PAGE_SIZE) {
jfs_err("MetaData crosses page boundary!!");
jfs_err("lblock = %lx, size = %d", lblock, size);
dump_stack();
return NULL;
}
if (absolute)
mapping = JFS_SBI(inode->i_sb)->direct_inode->i_mapping;
else {
/*
* If an nfs client tries to read an inode that is larger
* than any existing inodes, we may try to read past the
* end of the inode map
*/
if ((lblock << inode->i_blkbits) >= inode->i_size)
return NULL;
mapping = inode->i_mapping;
}
if (new && (PSIZE == PAGE_SIZE)) {
folio = filemap_grab_folio(mapping, page_index);
if (IS_ERR(folio)) {
jfs_err("filemap_grab_folio failed!");
return NULL;
}
folio_mark_uptodate(folio);
} else {
folio = read_mapping_folio(mapping, page_index, NULL);
if (IS_ERR(folio)) {
jfs_err("read_mapping_page failed!");
return NULL;
}
folio_lock(folio);
}
mp = folio_to_mp(folio, page_offset);
if (mp) {
if (mp->logical_size != size) {
jfs_error(inode->i_sb,
"get_mp->logical_size != size\n");
jfs_err("logical_size = %d, size = %d",
mp->logical_size, size);
dump_stack();
goto unlock;
}
mp->count++;
lock_metapage(mp);
if (test_bit(META_discard, &mp->flag)) {
if (!new) {
jfs_error(inode->i_sb,
"using a discarded metapage\n");
discard_metapage(mp);
goto unlock;
}
clear_bit(META_discard, &mp->flag);
}
} else {
INCREMENT(mpStat.pagealloc);
mp = alloc_metapage(GFP_NOFS);
if (!mp)
goto unlock;
mp->folio = folio;
mp->sb = inode->i_sb;
mp->flag = 0;
mp->xflag = COMMIT_PAGE;
mp->count = 1;
mp->nohomeok = 0;
mp->logical_size = size;
mp->data = folio_address(folio) + page_offset;
mp->index = lblock;
if (unlikely(insert_metapage(folio, mp))) {
free_metapage(mp);
goto unlock;
}
lock_metapage(mp);
}
if (new) {
jfs_info("zeroing mp = 0x%p", mp);
memset(mp->data, 0, PSIZE);
}
folio_unlock(folio);
jfs_info("__get_metapage: returning = 0x%p data = 0x%p", mp, mp->data);
return mp;
unlock:
folio_unlock(folio);
return NULL;
}
void grab_metapage(struct metapage * mp)
{
jfs_info("grab_metapage: mp = 0x%p", mp);
folio_get(mp->folio);
folio_lock(mp->folio);
mp->count++;
lock_metapage(mp);
folio_unlock(mp->folio);
}
static int metapage_write_one(struct folio *folio)
{
struct address_space *mapping = folio->mapping;
struct writeback_control wbc = {
.sync_mode = WB_SYNC_ALL,
.nr_to_write = folio_nr_pages(folio),
};
int ret = 0;
BUG_ON(!folio_test_locked(folio));
folio_wait_writeback(folio);
if (folio_clear_dirty_for_io(folio)) {
folio_get(folio);
ret = metapage_write_folio(folio, &wbc);
if (ret == 0)
folio_wait_writeback(folio);
folio_put(folio);
} else {
folio_unlock(folio);
}
if (!ret)
ret = filemap_check_errors(mapping);
return ret;
}
void force_metapage(struct metapage *mp)
{
struct folio *folio = mp->folio;
jfs_info("force_metapage: mp = 0x%p", mp);
set_bit(META_forcewrite, &mp->flag);
clear_bit(META_sync, &mp->flag);
folio_get(folio);
folio_lock(folio);
folio_mark_dirty(folio);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
clear_bit(META_forcewrite, &mp->flag);
folio_put(folio);
}
void hold_metapage(struct metapage *mp)
{
folio_lock(mp->folio);
}
void put_metapage(struct metapage *mp)
{
if (mp->count || mp->nohomeok) {
/* Someone else will release this */
folio_unlock(mp->folio);
return;
}
folio_get(mp->folio);
mp->count++;
lock_metapage(mp);
folio_unlock(mp->folio);
release_metapage(mp);
}
void release_metapage(struct metapage * mp)
{
struct folio *folio = mp->folio;
jfs_info("release_metapage: mp = 0x%p, flag = 0x%lx", mp, mp->flag);
folio_lock(folio);
unlock_metapage(mp);
assert(mp->count);
if (--mp->count || mp->nohomeok) {
folio_unlock(folio);
folio_put(folio);
return;
}
if (test_bit(META_dirty, &mp->flag)) {
folio_mark_dirty(folio);
if (test_bit(META_sync, &mp->flag)) {
clear_bit(META_sync, &mp->flag);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
folio_lock(folio);
}
} else if (mp->lsn) /* discard_metapage doesn't remove it */
remove_from_logsync(mp);
/* Try to keep metapages from using up too much memory */
drop_metapage(folio, mp);
folio_unlock(folio);
folio_put(folio);
}
void __invalidate_metapages(struct inode *ip, s64 addr, int len)
{
sector_t lblock;
int l2BlocksPerPage = PAGE_SHIFT - ip->i_blkbits;
int BlocksPerPage = 1 << l2BlocksPerPage;
/* All callers are interested in block device's mapping */
struct address_space *mapping =
JFS_SBI(ip->i_sb)->direct_inode->i_mapping;
struct metapage *mp;
unsigned int offset;
/*
* Mark metapages to discard. They will eventually be
* released, but should not be written.
*/
for (lblock = addr & ~(BlocksPerPage - 1); lblock < addr + len;
lblock += BlocksPerPage) {
struct folio *folio = filemap_lock_folio(mapping,
lblock >> l2BlocksPerPage);
if (IS_ERR(folio))
continue;
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (!mp)
continue;
if (mp->index < addr)
continue;
if (mp->index >= addr + len)
break;
clear_bit(META_dirty, &mp->flag);
set_bit(META_discard, &mp->flag);
if (mp->lsn)
remove_from_logsync(mp);
}
folio_unlock(folio);
folio_put(folio);
}
}
#ifdef CONFIG_JFS_STATISTICS
int jfs_mpstat_proc_show(struct seq_file *m, void *v)
{
seq_printf(m,
"JFS Metapage statistics\n"
"=======================\n"
"page allocations = %d\n"
"page frees = %d\n"
"lock waits = %d\n",
mpStat.pagealloc,
mpStat.pagefree,
mpStat.lockwait);
return 0;
}
#endif
] |
| 32/4 | 2026/03/16 15:19 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 11769 output: 19 thoughts: 1021

Thoughts: **Analyzing the Crash in `jfs_metapage.c`**

Okay, the user pointed to a crash at `fs/jfs/jfs_metapage.c:428`. First, I want to verify the context of that line within the code I have. I quickly checked the `metapage_write_folio` function in the file. Since the `read-file` output didn't include line numbers, I just mentally scanned the code, knowing the function started around line 421. I quickly confirmed that line 426 contains `struct inode *inode = folio->mapping->host;`. Now, according to the user, the crash happened at line 428, but looking at the code, there's only a single line between 426 and 428. Something doesn't add up, which leads me to think that either the kernel version is different, or the provided information is slightly off.

Next, I need to understand what's happening in the assembly code, so I am going to try to map the assembly to the C code. I see that `%r12` is likely the `folio` pointer and that `0x18(%r12)` is being loaded into `%r14`. The assembly code then checks the value at that memory location using KASAN (Kernel Address Sanitizer). This pattern of shifting the pointer and comparing the value is typical for KASAN shadow checks, confirming the pointer is valid. The code then dereferences `%r14` with `mov (%r14), %r14` and checks its validity.

It seems like the assembly code is dereferencing `folio` twice. If `%r12` is indeed `folio`, then the offset `0x18(%r12)` points to `folio->mapping`, since offset 0x18 within the `folio` struct is the `mapping` field. Now, I have to examine the layout of the `struct folio` to confirm this mapping and find out what that second dereference is pointing to. This crash appears to be due to KASAN detecting an invalid memory access during the dereference of the `folio` structure. |
| 33/4 | 2026/03/16 15:19 | tool | codesearch-struct-layout |
Args:map[Name:folio] Results: map[Fields:[map[Name: OffsetBits:0 SizeBits:512] map[Name: OffsetBits:512 SizeBits:512] map[Name: OffsetBits:1024 SizeBits:512] map[Name: OffsetBits:1536 SizeBits:512]]] |
| 34/4 | 2026/03/16 15:19 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 13365 output: 37 thoughts: 60

Thoughts: **Investigating Folio Structure**

Okay, so the initial `codesearch-struct-layout` command for `folio` wasn't as helpful as I'd hoped – it didn't give me the individual fields. That suggests it's a more complex data structure, likely involving some kind of union or perhaps nested structures. Hmm, not surprising, `folio` is a pretty fundamental memory management construct.

Alright, time to dig deeper. I need to find the definitive definition of `struct folio` itself. Based on my experience, and where these things usually reside in the Linux kernel, my best bet is to look in `include/linux/mm_types.h`. That's where I would expect to find the core data structures related to memory management. Let's see what we find there. |
| 35/4 | 2026/03/16 15:19 | tool | codesearch-definition-source |
Args:map[ContextFile:include/linux/mm.h IncludeLines:true Name:folio] Results: map[SourceCode: 357: /**
358: * struct folio - Represents a contiguous set of bytes.
359: * @flags: Identical to the page flags.
360: * @lru: Least Recently Used list; tracks how recently this folio was used.
361: * @mlock_count: Number of times this folio has been pinned by mlock().
362: * @mapping: The file this page belongs to, or refers to the anon_vma for
363: * anonymous memory.
364: * @index: Offset within the file, in units of pages. For anonymous memory,
365: * this is the index from the beginning of the mmap.
366: * @share: number of DAX mappings that reference this folio. See
367: * dax_associate_entry.
368: * @private: Filesystem per-folio data (see folio_attach_private()).
369: * @swap: Used for swp_entry_t if folio_test_swapcache().
370: * @_mapcount: Do not access this member directly. Use folio_mapcount() to
371: * find out how many times this folio is mapped by userspace.
372: * @_refcount: Do not access this member directly. Use folio_ref_count()
373: * to find how many references there are to this folio.
374: * @memcg_data: Memory Control Group data.
375: * @pgmap: Metadata for ZONE_DEVICE mappings
376: * @virtual: Virtual address in the kernel direct map.
377: * @_last_cpupid: IDs of last CPU and last process that accessed the folio.
378: * @_entire_mapcount: Do not use directly, call folio_entire_mapcount().
379: * @_large_mapcount: Do not use directly, call folio_mapcount().
380: * @_nr_pages_mapped: Do not use outside of rmap and debug code.
381: * @_pincount: Do not use directly, call folio_maybe_dma_pinned().
382: * @_nr_pages: Do not use directly, call folio_nr_pages().
383: * @_mm_id: Do not use outside of rmap code.
384: * @_mm_ids: Do not use outside of rmap code.
385: * @_mm_id_mapcount: Do not use outside of rmap code.
386: * @_hugetlb_subpool: Do not use directly, use accessor in hugetlb.h.
387: * @_hugetlb_cgroup: Do not use directly, use accessor in hugetlb_cgroup.h.
388: * @_hugetlb_cgroup_rsvd: Do not use directly, use accessor in hugetlb_cgroup.h.
389: * @_hugetlb_hwpoison: Do not use directly, call raw_hwp_list_head().
390: * @_deferred_list: Folios to be split under memory pressure.
391: * @_unused_slab_obj_exts: Placeholder to match obj_exts in struct slab.
392: *
393: * A folio is a physically, virtually and logically contiguous set
394: * of bytes. It is a power-of-two in size, and it is aligned to that
395: * same power-of-two. It is at least as large as %PAGE_SIZE. If it is
396: * in the page cache, it is at a file offset which is a multiple of that
397: * power-of-two. It may be mapped into userspace at an address which is
398: * at an arbitrary page offset, but its kernel virtual address is aligned
399: * to its size.
400: */
401: struct folio {
402: /* private: don't document the anon union */
403: union {
404: struct {
405: /* public: */
406: memdesc_flags_t flags;
407: union {
408: struct list_head lru;
409: /* private: avoid cluttering the output */
410: /* For the Unevictable "LRU list" slot */
411: struct {
412: /* Avoid compound_head */
413: void *__filler;
414: /* public: */
415: unsigned int mlock_count;
416: /* private: */
417: };
418: /* public: */
419: struct dev_pagemap *pgmap;
420: };
421: struct address_space *mapping;
422: union {
423: pgoff_t index;
424: unsigned long share;
425: };
426: union {
427: void *private;
428: swp_entry_t swap;
429: };
430: atomic_t _mapcount;
431: atomic_t _refcount;
432: #ifdef CONFIG_MEMCG
433: unsigned long memcg_data;
434: #elif defined(CONFIG_SLAB_OBJ_EXT)
435: unsigned long _unused_slab_obj_exts;
436: #endif
437: #if defined(WANT_PAGE_VIRTUAL)
438: void *virtual;
439: #endif
440: #ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS
441: int _last_cpupid;
442: #endif
443: /* private: the union with struct page is transitional */
444: };
445: struct page page;
446: };
447: union {
448: struct {
449: unsigned long _flags_1;
450: unsigned long _head_1;
451: union {
452: struct {
453: /* public: */
454: atomic_t _large_mapcount;
455: atomic_t _nr_pages_mapped;
456: #ifdef CONFIG_64BIT
457: atomic_t _entire_mapcount;
458: atomic_t _pincount;
459: #endif /* CONFIG_64BIT */
460: mm_id_mapcount_t _mm_id_mapcount[2];
461: union {
462: mm_id_t _mm_id[2];
463: unsigned long _mm_ids;
464: };
465: /* private: the union with struct page is transitional */
466: };
467: unsigned long _usable_1[4];
468: };
469: atomic_t _mapcount_1;
470: atomic_t _refcount_1;
471: /* public: */
472: #ifdef NR_PAGES_IN_LARGE_FOLIO
473: unsigned int _nr_pages;
474: #endif /* NR_PAGES_IN_LARGE_FOLIO */
475: /* private: the union with struct page is transitional */
476: };
477: struct page __page_1;
478: };
479: union {
480: struct {
481: unsigned long _flags_2;
482: unsigned long _head_2;
483: /* public: */
484: struct list_head _deferred_list;
485: #ifndef CONFIG_64BIT
486: atomic_t _entire_mapcount;
487: atomic_t _pincount;
488: #endif /* !CONFIG_64BIT */
489: /* private: the union with struct page is transitional */
490: };
491: struct page __page_2;
492: };
493: union {
494: struct {
495: unsigned long _flags_3;
496: unsigned long _head_3;
497: /* public: */
498: void *_hugetlb_subpool;
499: void *_hugetlb_cgroup;
500: void *_hugetlb_cgroup_rsvd;
501: void *_hugetlb_hwpoison;
502: /* private: the union with struct page is transitional */
503: };
504: struct page __page_3;
505: };
506: };
SourceFile:include/linux/mm_types.h] |
| 36/4 | 2026/03/16 15:19 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 16056 output: 21 thoughts: 717

Thoughts: **Analyzing the Assembly: A Folio's Mapping**

Okay, let's break this down. From the C code snippet, I see a `struct folio` definition. The key here is the offset of the `mapping` field. `flags` is an `unsigned long`, typically 8 bytes, and `lru` is a `struct list_head`, which is 16 bytes (2 pointers), so the `mapping` field should indeed be at an offset of 0x18 from the start of the `folio` struct. That `lea` instruction at line 5 confirms that; `%r14` is being loaded with the address of `folio->mapping`.

Now, the assembly jumps into some checks and potentially traps. The next two `mov` instructions, followed by the `shr` and `cmpb` sequence, indicate an address validation of sorts. The goal is to verify that what's pointed to by `%r14`, which we established is the address of `folio->mapping`, is a valid memory address. The shift right by 3 on the address in `%rax` makes me think this is checking some kind of alignment or page boundary.

The second validation, at line 2a, is what's crucial. We load the content of `folio->mapping` into `%r14`. Now the code attempts to validate *that* address. Since the user mentions this is the trapping instruction, I can deduce `%rax` is zero at this point. That tells me that after the first `mov` at line 20, `%r14` holds NULL because `folio->mapping` must be NULL. The `shr` instruction on the following line doesn't change anything, a bit shift on zero remains zero, therefore the subsequent `cmpb` instruction causes a trap.

So, the `folio->mapping` pointer itself is NULL. The following instruction, `mov (%r14), %rbx`, is the critical one. That instruction tries to dereference `folio->mapping` to access its `host` member. Knowing the structure's layout, it seems reasonable to assume `host` would be the first member within the `struct address_space`. This would have caused a segmentation fault and a trap.

I need to confirm the structure definition and the placement of the `host` member in `struct address_space` to be certain. But from this disassembly, it seems clear the code is correctly detecting a NULL `folio->mapping` and attempting a dereference, which is where the trap occurs. |
| 37/4 | 2026/03/16 15:19 | tool | codesearch-struct-layout |
Args:map[Name:address_space] Results: map[Fields:[map[Name:host OffsetBits:0 SizeBits:64] map[Name:i_pages OffsetBits:64 SizeBits:1152] map[Name:invalidate_lock OffsetBits:1216 SizeBits:1088] map[Name:gfp_mask OffsetBits:2304 SizeBits:32] map[Name:i_mmap_writable OffsetBits:2336 SizeBits:32] map[Name:i_mmap OffsetBits:2368 SizeBits:128] map[Name:nrpages OffsetBits:2496 SizeBits:64] map[Name:writeback_index OffsetBits:2560 SizeBits:64] map[Name:a_ops OffsetBits:2624 SizeBits:64] map[Name:flags OffsetBits:2688 SizeBits:64] map[Name:wb_err OffsetBits:2752 SizeBits:32] map[Name:i_private_lock OffsetBits:2816 SizeBits:1024] map[Name:i_private_list OffsetBits:3840 SizeBits:128] map[Name:i_mmap_rwsem OffsetBits:3968 SizeBits:1088] map[Name:i_private_data OffsetBits:5056 SizeBits:64]]] |
| 38/4 | 2026/03/16 15:19 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 17691 output: 22 thoughts: 2291

Thoughts: **Analysis of the Folio Mapping Dereference Crash**

Okay, so the issue boils down to a NULL pointer dereference, specifically within `metapage_write_folio`. I've traced the crash through the assembly, confirming that `%r12` holds the `folio` address, and `%r14` is indeed calculated as `folio->mapping` (at offset 0x18 in the `folio` struct). The crucial instruction is the KASAN check `cmpb $0x0,(%rax,%r15,1)`, which is failing because `%rax` (which holds the value of `folio->mapping` shifted right by 3) is 0, indicating that `folio->mapping` is NULL.

I've examined the calling contexts of `metapage_write_folio`: `metapage_writepages` and `metapage_write_one`. Both *should* ensure that the `folio` has a valid `mapping`. `writeback_iter` in `metapage_writepages` *should* only provide folios that have mappings, and `metapage_write_one` retrieves the mapping via `folio->mapping`. The folio is locked in both cases, which *should* prevent the mapping from becoming NULL during the write process. I've checked the definition of the `folio` struct, and confirmed the mapping offset. I'm also considering the JFS metapage specifics, and how they may relate to either the direct inode or regular inode's mapping.

I suspect something is causing the `folio->mapping` to become NULL *after* it's been initialized and *before* it's used in `metapage_write_folio`. Perhaps there's a race condition related to truncation or detachment, or a subtle bug in how the JFS metapages handle mappings or inode destruction. The fact that the folio is locked is throwing me off, since that *should* prevent the mapping from becoming NULL. The crash is happening within a KASAN check, which means it's detecting an illegal memory access, and that is caused by the NULL mapping. It's likely an issue where a folio is being written back, but has been somehow unlinked from its mapping (likely an inode or page cache issue).

I need to look closer at the locking primitives and how folios are handled in JFS, especially under conditions like inode destruction or truncation. I'm also going to need to look into other accesses to the writeback control to ensure `wbc` is not the cause. The key is figuring out the *sequence* of events that leads to `folio->mapping` becoming NULL after the folio lock is acquired, likely a race condition of some kind. |
| 39/4 | 2026/03/16 15:20 | tool | codesearch-struct-layout |
Args:map[Name:writeback_control] Results: map[Fields:[map[Name:nr_to_write OffsetBits:0 SizeBits:64] map[Name:pages_skipped OffsetBits:64 SizeBits:64] map[Name:range_start OffsetBits:128 SizeBits:64] map[Name:range_end OffsetBits:192 SizeBits:64] map[Name:sync_mode OffsetBits:256 SizeBits:32] map[Name:for_kupdate OffsetBits:288 SizeBits:1] map[Name:for_background OffsetBits:289 SizeBits:1] map[Name:tagged_writepages OffsetBits:290 SizeBits:1] map[Name:range_cyclic OffsetBits:291 SizeBits:1] map[Name:for_sync OffsetBits:292 SizeBits:1] map[Name:unpinned_netfs_wb OffsetBits:293 SizeBits:1] map[Name:no_cgroup_owner OffsetBits:294 SizeBits:1] map[Name:fbatch OffsetBits:320 SizeBits:2048] map[Name:index OffsetBits:2368 SizeBits:64] map[Name:saved_err OffsetBits:2432 SizeBits:32] map[Name:wb OffsetBits:2496 SizeBits:64] map[Name:inode OffsetBits:2560 SizeBits:64] map[Name:wb_id OffsetBits:2624 SizeBits:32] map[Name:wb_lcand_id OffsetBits:2656 SizeBits:32] map[Name:wb_tcand_id OffsetBits:2688 SizeBits:32] map[Name:wb_bytes OffsetBits:2752 SizeBits:64] map[Name:wb_lcand_bytes OffsetBits:2816 SizeBits:64] map[Name:wb_tcand_bytes OffsetBits:2880 SizeBits:64]]] |
| 40/4 | 2026/03/16 15:20 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 21098 output: 23 thoughts: 472

Thoughts: **Debugging a Potential NULL Pointer Dereference in JFS**

Okay, so I'm looking at this code and trying to figure out why we're potentially hitting a NULL pointer dereference. The `struct writeback_control` doesn't seem to have anything interesting at offset 0x18, so that's not it. `range_end` is at 24 and it's a `loff_t` (long long), not a pointer. It looks more like a `struct folio` is being used, judging by the offsets. Let's revisit that assembly...

```assembly
5:  4d 8d 74 24 18    lea 0x18(%r12),%r14
...
20: 4d 8b 36          mov (%r14),%r14
...
39: 49 8b 1e          mov (%r14),%rbx
```

Ah, that confirms it: `%r14` is being loaded with `0x18(%r12)`. Then, `%r14` is dereferenced. Then `%r14` is dereferenced again. That looks *exactly* like:

`%r14 = &folio->mapping;`
`%r14 = folio->mapping;`
`%rbx = folio->mapping->host;`

So, `(%r14)` at line `39` looks like `folio->mapping->host`, making `%rbx` equal to `inode`. So that looks like it corresponds perfectly with the line `struct inode *inode = folio->mapping->host;`.

Now, the critical question: why is `folio->mapping` NULL? In JFS, as I recall, `metapage_write_folio` is called for folios that contain metapages. Those folios should, in theory, have a mapping – either a block device mapping or an inode mapping. If a folio is in writeback, it *should* have a mapping. I wonder if it could be related to inode eviction. Maybe the mapping is cleared during eviction, but if the folio is locked, that *shouldn't* happen.

Time to investigate potential JFS issues. I'll search for recent changes in `fs/jfs/jfs_metapage.c` to see if there are any known scenarios where `folio->mapping` could become NULL during this process. Let's see if there's a recent patch that addresses this, or if it's a known, unresolved issue. |
| 41/4 | 2026/03/16 15:20 | tool | codesearch-dir-index |
Args:map[Dir:fs/jfs/] Results: map[Files:[acl.c file.c inode.c ioctl.c jfs_acl.h jfs_btree.h jfs_debug.c jfs_debug.h jfs_dinode.h jfs_discard.c jfs_discard.h jfs_dmap.c jfs_dmap.h jfs_dtree.c jfs_dtree.h jfs_extent.c jfs_extent.h jfs_filsys.h jfs_imap.c jfs_imap.h jfs_incore.h jfs_inode.c jfs_inode.h jfs_lock.h jfs_logmgr.c jfs_logmgr.h jfs_metapage.c jfs_metapage.h jfs_mount.c jfs_superblock.h jfs_txnmgr.c jfs_txnmgr.h jfs_types.h jfs_umount.c jfs_unicode.c jfs_unicode.h jfs_xattr.h jfs_xtree.c jfs_xtree.h namei.c resize.c super.c symlink.c xattr.c] Subdirs:<nil>] |
| 42/4 | 2026/03/16 15:20 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 22580 output: 32 thoughts: 4149

Thoughts: **Analysis of a Potential Kernel Crash in JFS Metapage Handling**

Okay, I need to figure out why we're seeing this crash at `fs/jfs/jfs_metapage.c:428` inside `metapage_write_folio`. My immediate thought is to check the Git history, but since I can't do that directly, I'll go through the code with a fine-tooth comb. I'll also try a `codesearch-find-references` to see if anyone else is encountering this. My main concern is that `folio->mapping` might be NULL when `metapage_write_folio` is called.

Looking at `metapage_writepages`, the `writeback_iter` function is crucial. It gives folios *from* a specific mapping, so folios returned by `writeback_iter` should always have their `mapping` field set to the `mapping` argument passed to `metapage_writepages`. But what if a folio is removed from its mapping *after* `writeback_iter` finds it, but before `metapage_write_folio` uses it? `writeback_iter` locks folios, so it shouldn't be removable after its find, unless the locking mechanisms are incomplete. Perhaps `truncate` or `evict` are the culprits, but they too should require a lock. Wait, what if the `mapping` itself is being destroyed?

Then, let's go over `metapage_write_one`. It calls `metapage_write_folio`, and `metapage_write_folio` dereferences `folio->mapping->host`. If `folio->mapping` is NULL, then that's a crash. In JFS, metapages are often associated with the block device's address space. It looks like `JFS_SBI(ip->i_sb)->direct_inode->i_mapping` is the mapping, and if the block device is closing or the filesystem is unmounting, this mapping *could* be going away.

I'm checking `release_metapage`. The code there uses `folio_mark_dirty` and possibly calls `metapage_write_one`. If the folio was detached, `folio->mapping` would be NULL. The metapage's folio is set during the metapage's creation. It's never cleared until the metapage is freed. This makes me consider if `metapage_write_folio` is somehow being called on a folio that's not associated with any mapping.

Now, let's dig into `__get_metapage`. It always obtains the folio from a mapping, either the inode's mapping or the direct inode's mapping. The folio *must* be derived from a mapping. If `folio->mapping` is NULL, it *must* have been cleared later. So, I see `folio_detach_private` in `remove_metapage`, but that's private data, not the mapping itself. I need to check if JFS uses `folio_detach_mapping` at any point. Normally, the mapping is set to NULL when a page is removed from the page cache. If `metapage_write_folio` is called via `release_metapage`, is it possible the folio has already been removed from the page cache? But `release_metapage` is called when we're done with a metapage.

Okay, going back to `metapage_write_one`, the code checks `filemap_check_errors`. Calling this function on a NULL mapping would crash. So, if `metapage_write_one` is passed a folio with no mapping, it *should* crash there. But the report specifies a crash in `metapage_write_folio`.

Now, focusing on the assembly, the trapping instruction's location indicates the load from `folio->mapping` is failing. With `mov (%r14), %r14`, if `folio->mapping` is NULL, then `%r14` is 0. If the load is failing, it's because `%r14` holds a valid pointer but the thing that's pointed to is zero. I can *confirm* that the crash happens because `folio` is valid, but `folio->mapping` is NULL.

I need to confirm there is a scenario where JFS uses folios without mappings. `__get_metapage` shows that metapages *should* be acquired from a valid mapping. I am considering what would happen if the mapping was freed... What if the folio *is* a metapage folio, created without a mapping? No, this is not possible because of the use of `filemap_grab_folio` and `read_
Args:map[SourceFile:fs/jfs/jfs_metapage.h] Results: map[Entities:[map[Kind:function Name:_metapage_homeok] map[Kind:function Name:discard_metapage] map[Kind:function Name:flush_metapage] map[Kind:function Name:metapage_homeok] map[Kind:function Name:metapage_nohomeok] map[Kind:function Name:metapage_wait_for_io] map[Kind:function Name:write_metapage] map[Kind:struct Name:metapage]]] |
| 44/4 | 2026/03/16 15:20 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 27935 output: 47 thoughts: 22 Thoughts: Here's my take on those thoughts, tailored for an expert audience: **Diving into `write_metapage`** Okay, so I'm mentally pulling up `write_metapage` from `jfs_metapage.h`. I need to re-familiarize myself with its exact signature, purpose, and how it interacts with the rest of the JFS metapage management system. Specifically, I'm thinking about the low-level details of how it handles I/O, potential locking schemes (if any), and any considerations around the data structures it manipulates. Am I correct that this function is the core of writing a complete metapage to disk? I need to quickly recall the implications for crash recovery and journal management. What error codes does it return? Are there any subtle race conditions or atomicity issues that immediately jump to mind that I should be double-checking? I want to make sure I haven't forgotten anything significant related to potential bottlenecks in performance. My aim is to use it as a reference point for some larger changes, so I need to be entirely familiar with it. |
| 45/4 | 2026/03/16 15:20 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_metapage.h IncludeLines:true Name:write_metapage] Results: map[SourceCode: 72: static inline void write_metapage(struct metapage *mp)
73: {
74: set_bit(META_dirty, &mp->flag);
75: release_metapage(mp);
76: }
SourceFile:fs/jfs/jfs_metapage.h] |
| 46/4 | 2026/03/16 15:20 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 28338 output: 47 thoughts: 9 Thoughts: Okay, here's my attempt at summarizing the thought process, tailored for an expert audience, and written in the first person: **Navigating the Metapage Flush** So, I'm thinking about this `flush_metapage` function, and my initial question is: What's the context here? Are we talking about a specific file system implementation, a database system, or perhaps something more generalized? Knowing the architecture is key. I'm assuming we're dealing with a system that uses metapages to manage metadata, but the specifics around how those pages are structured and accessed are critical. My brain's immediately jumping to potential implications. Are we concerned with write-back strategies? Does this function dictate when metadata is actually committed to persistent storage, or is it more about invalidating cached copies? What kind of consistency guarantees are we aiming for? Is this a synchronous operation, or can it be deferred? And if deferred, what are the failure recovery considerations? I need to understand its relationship to other functions in the system. Does it play nicely with the buffer cache, or is it going to create unnecessary contention? Does it interact with journaling or logging mechanisms? Are there any potential performance bottlenecks or locking issues inherent in its design? Finally, are we dealing with hardware constraints? Is there a particular storage device type this is optimized for? Flash memory, spinning disks, NVRAM – those would inform the decisions made in the function. Different storage media have varying access times and write characteristics, and the `flush_metapage` function probably needs to reflect that. |
| 47/4 | 2026/03/16 15:21 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_metapage.h IncludeLines:true Name:flush_metapage] Results: map[SourceCode: 78: static inline void flush_metapage(struct metapage *mp)
79: {
80: set_bit(META_sync, &mp->flag);
81: write_metapage(mp);
82: }
SourceFile:fs/jfs/jfs_metapage.h] |
| 48/4 | 2026/03/16 15:21 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 28824 output: 25 thoughts: 927 Thoughts: **Analyzing a JFS Metapage Flush Crash** Okay, so I'm looking at this crash in the JFS file system, specifically a problem during a metapage flush. The code seems to be following this sequence: `flush_metapage` sets some flags, then calls `release_metapage`, which in turn calls `metapage_write_one` if the page is both dirty and needs to be synchronized. The core issue seems to be that `folio->mapping` is NULL at the time of the write attempt. My initial reaction was, "Why is the system trying to flush a page that's already been truncated?" That doesn't make a whole lot of sense generally, unless it is a metadata page that was associated with a file, not the block device itself. I thought maybe some inode truncation was happening, even though JFS metadata isn't usually truncated like that. But, some JFS metadata *is* stored in regular files. However, absolute metapages on the block device itself should *never* be truncated! Then, I thought, wait, if an inode is being evicted, its pages are truncated. If JFS is still holding a metapage for one of those truncated pages, this is exactly the kind of situation that would cause this problem. Let me go back to the assembly. Looking at the registers, `%r12` holds the `folio` structure. Then, at offset `0x18` from `%r12` we have `folio->mapping`. A subsequent `mov` instruction loads the contents of `folio->mapping` into `%r14`. `%r14` is then zero, and the crash happens when it tries to dereference a memory location based on `%r14`, essentially dereferencing a NULL pointer. This confirms my suspicions that we are indeed dereferencing a null `folio->mapping`. Now, I want to confirm that `%r12` really is `folio` and that there isn't some misinterpretation in the assembly. I'm going back to the function definition `metapage_write_folio(struct folio *folio, struct writeback_control *wbc)`. On x86_64, the first argument is passed in `%rdi`. 
The assembly shows `lea 0x18(%r12), %r14`. Since `%r12` is callee-saved, I know the function probably saved `%rdi` into `%r12`. I've already checked that `0x18` matches the `mapping` field in the `struct folio`. Let me double-check: is there any overlap with `struct page`? Yes, in many kernel versions, `struct folio` and `struct page` have `mapping` at the same offset (24 bytes in 64-bit). So, yes, I'm confident that: 1. `%r12` holds the `folio` structure. 2. `0x18(%r12)` is indeed `folio->mapping`. 3. The crash *is* occurring because we're dereferencing a NULL `folio->mapping`, which translates to the code: `struct inode *inode = folio->mapping->host;`, where `folio->mapping` is NULL. My next step is to check if this specific NULL mapping issue is a known bug in JFS. I'll search for "JFS folio mapping NULL." Or, maybe `codesearch-find-references` for `metapage_write_folio` will turn up some helpful comments or related code. |
| 49/4 | 2026/03/16 15:21 | tool | codesearch-find-references |
Args:map[Name:metapage_write_folio] Results: map[References:[map[ReferenceKind:calls ReferencingEntityKind:function ReferencingEntityName:metapage_write_one SourceFile:fs/jfs/jfs_metapage.c SourceLine:818] map[ReferenceKind:calls ReferencingEntityKind:function ReferencingEntityName:metapage_writepages SourceFile:fs/jfs/jfs_metapage.c SourceLine:558]] TruncatedOutput:false] |
| 50/4 | 2026/03/16 15:21 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 30682 output: 19 thoughts: 1221 Thoughts: **Analyzing a NULL `folio->mapping` Crash**
Okay, so I've been digging into this crash, and here's my line of thinking. It looks like `metapage_write_folio` is being called from `metapage_write_one`, and that's happening at line 818. I've got the relevant code snippet here:
```c
811: static int metapage_write_one(struct folio *folio)
812: {
813: struct address_space *mapping = folio->mapping;
...
818: ret = metapage_write_folio(folio, &wbc);
```
The problem seems to be that `folio->mapping` can be NULL, and the crash is happening *inside* `metapage_write_folio`. I see that on line 813 `mapping` is assigned the value of `folio->mapping`. Then on line 826 we check if `ret` is 0 and call `filemap_check_errors(mapping)` which would cause a crash if `mapping` is NULL. However, it seems that `metapage_write_folio` is crashing *before* the check at line 826.
I'm thinking that if `folio->mapping` is NULL, then that's the root cause. This could happen if the folio is in the process of being freed or truncated. I'll need to check if there are any known bugs regarding this. While I do that, the user's report is that the crash happens at `cmpb $0x0,(%rax,%r15,1)`, and that `%rax` is zero. I can use the assembly and struct layout to answer their question.
The user says the crash is at `cmpb $0x0,(%rax,%r15,1)` where `%rax` is 0.
Based on the assembly, I think I've got it. This looks like a KASAN check. It's checking the *value* loaded from `folio->mapping`. If `folio->mapping` is NULL, `%r14` will be zero. That zero gets loaded into `%rax`. The KASAN check `cmpb $0x0,(%rax,%r15,1)` is essentially trying to validate that the memory at `%rax` is accessible. Since `%rax` is zero, we're trying to dereference a NULL pointer. KASAN is there to catch exactly this situation. It inserts a check before the actual dereference which would be `mov (%r14), %rbx` later in the assembly.
Let's look at the assembly again.
```assembly
20: 4d 8b 36 mov (%r14),%r14
23: 4c 89 f0 mov %r14,%rax
26: 48 c1 e8 03 shr $0x3,%rax
* 2a: 42 80 3c 38 00 cmpb $0x0,(%rax,%r15,1) <-- trapping instruction
```
Yes, it's checking the pointer that was just loaded. This `cmpb` is the KASAN check. The crash here is the tell-tale sign of a NULL pointer dereference.
If the pointer is NULL, `%rax` is 0.
The KASAN shadow for address 0 is at `%r15 + 0`.
Accessing `%r15` might be fine, but the shadow value for address 0 will indicate it's not accessible.
Actually, on many systems, the shadow for address 0 is not even mapped, so the `cmpb` itself faults.
Or if it is mapped, it contains a non-zero value, and the code would then call the KASAN error handler.
But here it says "trapping instruction", which usually means a segmentation fault or similar.
If `%rax` is 0 and `%r15` is `0xdffffc0000000000`, then it's accessing `0xdffffc0000000000`.
If that address is not mapped, it faults.
So, in summary: `folio->mapping` *must* be NULL, leading to a NULL pointer dereference.
Now, I need to confirm the layout of `struct folio` to be absolutely certain. The offset of `mapping` (0x18) might vary based on kernel configuration options.
I'm checking `include/linux/mm_
|
| 51/4 | 2026/03/16 15:21 | tool | codesearch-struct-layout |
Args:map[Name:page] Results: map[Fields:[map[Name:flags OffsetBits:0 SizeBits:64] map[Name: OffsetBits:64 SizeBits:320] map[Name: OffsetBits:384 SizeBits:32] map[Name:_refcount OffsetBits:416 SizeBits:32] map[Name:memcg_data OffsetBits:448 SizeBits:64]]] |
| 52/4 | 2026/03/16 15:21 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 33057 output: 21 thoughts: 149 Thoughts: **Analyzing the Page Structure**
Okay, so the `codesearch-struct-layout` output for `page` isn't giving me much clarity, especially with those unions in the mix. Let me double-check the definitive source, `include/linux/mm_types.h`. Right, here it is:
```c
struct page {
unsigned long flags;
union {
struct list_head lru;
...
};
struct address_space *mapping;
};
```
Yep, it's the same structure definition. Alright, based on this, `0x18` offset within the `struct page` almost certainly corresponds to the `mapping` field.
Now, I need to consider all possibilities for `%r12`. We've established it's not the `page` itself. Could it be `mp`, the metapage? Time to investigate the `struct metapage` layout. I need to figure out if it's pointing to the mapping.
|
| 53/4 | 2026/03/16 15:21 | tool | codesearch-struct-layout |
Args:map[Name:metapage] Results: map[Fields:[map[Name:xflag OffsetBits:0 SizeBits:16] map[Name:unused OffsetBits:16 SizeBits:16] map[Name:lid OffsetBits:32 SizeBits:16] map[Name:lsn OffsetBits:64 SizeBits:32] map[Name:synclist OffsetBits:128 SizeBits:128] map[Name:flag OffsetBits:256 SizeBits:64] map[Name:count OffsetBits:320 SizeBits:64] map[Name:data OffsetBits:384 SizeBits:64] map[Name:index OffsetBits:448 SizeBits:64] map[Name:wait OffsetBits:512 SizeBits:1152] map[Name:folio OffsetBits:1664 SizeBits:64] map[Name:sb OffsetBits:1728 SizeBits:64] map[Name:logical_size OffsetBits:1792 SizeBits:32] map[Name:clsn OffsetBits:1824 SizeBits:32] map[Name:nohomeok OffsetBits:1856 SizeBits:32] map[Name:log OffsetBits:1920 SizeBits:64]]] |
| 54/4 | 2026/03/16 15:21 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 33774 output: 19 thoughts: 2318 Thoughts: **Analysis of a KASAN Null Pointer Dereference in `metapage_write_folio`**
Okay, let me walk through this. I'm looking at this assembly and thinking about what's happening, and where the null pointer dereference is happening. I start by breaking down the data structures. The assembly suggests we're dealing with `struct metapage`, which contains fields like `xflag`, `lsn`, `synclist`, and, crucially, a `wait_queue_head_t` and a `folio`. Based on the sizes of each item, the start of the `folio` within `struct metapage` is likely at offset 208 bytes. However, it seems like we aren't looking at *metapage* itself in the assembly but the *folio*.
The assembly instruction `lea 0x18(%r12), %r14` is key. `0x18(%r12)` is offset 24 from `%r12`. Since this is near the beginning of the function, just after the prologue, it forms a pointer to offset 24 within the folio and loads it into `%r14`; therefore `%r12` is likely `folio`, and `0x18(%r12)` is `folio->mapping`.
Now, the use of `lea` instead of `mov` here stands out. This is a KASAN check. It loads the *address* of `folio->mapping` to check for validity before dereferencing it. It then loads the value of the pointer. This confirms that KASAN is being used for this pointer dereference. The crashing instruction is the KASAN check on the value of the pointer.
The crash occurs at the address check. This means that the address of `folio->mapping` is valid but the value stored in that memory location is NULL. This is confirmed by the assembly, which checks the shadow byte at `(%rax,%r15,1)` for the just-loaded pointer value.
So, the crash confirms `folio->mapping` is NULL.
I also double-check this:
```c
401: struct folio {
...
406: memdesc_flags_t flags;
407: union {
408: struct list_head lru;
...
420: };
421: struct address_space *mapping;
```
`lru` is a `struct list_head`.
```c
struct list_head {
struct list_head *next, *prev;
};
```
`next` is at offset 0 of `lru` (offset 8 of `folio`).
`prev` is at offset 8 of `lru` (offset 16 of `folio`).
So `mapping` is at offset 24 (0x18) of `folio`.
Yes, it's correct.
The `folio` structure is the standard page structure. `folio->mapping` is indeed at offset 24. This aligns perfectly. The code is trying to access `folio->mapping` which points to `struct address_space`. The assembly confirms this, specifically `mov (%r14), %r14`, after that, `%rbx` gets `folio->mapping->host`.
This crash, therefore, indicates that `folio->mapping` is NULL. This is unexpected because `metapage_write_folio` is typically called during writeback or from `release_metapage` where there should always be a valid mapping unless the folio has been truncated or is no longer associated with its mapping.
I'm now thinking about the context of the JFS filesystem. Is this called for anonymous folios, or just file or device backed folios? Given the nature of metapages in JFS, they are always tied to a file or block device, so there should always be a mapping. This likely happens when the file system is being unmounted or the backing block device is being closed. This is because when a folio is removed from the page cache, `folio->mapping` gets set to NULL.
Therefore, the bug is a JFS code path attempting to write back or operate on a folio that has already been removed from the page cache.
|
| 55/4 | 2026/03/16 15:21 | tool | codesearch-struct-layout |
Args:map[Name:inode] Results: map[Fields:[map[Name:i_mode OffsetBits:0 SizeBits:16] map[Name:i_opflags OffsetBits:16 SizeBits:16] map[Name:i_flags OffsetBits:32 SizeBits:32] map[Name:i_acl OffsetBits:64 SizeBits:64] map[Name:i_default_acl OffsetBits:128 SizeBits:64] map[Name:i_uid OffsetBits:192 SizeBits:32] map[Name:i_gid OffsetBits:224 SizeBits:32] map[Name:i_op OffsetBits:256 SizeBits:64] map[Name:i_sb OffsetBits:320 SizeBits:64] map[Name:i_mapping OffsetBits:384 SizeBits:64] map[Name:i_security OffsetBits:448 SizeBits:64] map[Name:i_ino OffsetBits:512 SizeBits:64] map[Name: OffsetBits:576 SizeBits:32] map[Name:i_rdev OffsetBits:608 SizeBits:32] map[Name:i_size OffsetBits:640 SizeBits:64] map[Name:i_atime_sec OffsetBits:704 SizeBits:64] map[Name:i_mtime_sec OffsetBits:768 SizeBits:64] map[Name:i_ctime_sec OffsetBits:832 SizeBits:64] map[Name:i_atime_nsec OffsetBits:896 SizeBits:32] map[Name:i_mtime_nsec OffsetBits:928 SizeBits:32] map[Name:i_ctime_nsec OffsetBits:960 SizeBits:32] map[Name:i_generation OffsetBits:992 SizeBits:32] map[Name:i_lock OffsetBits:1024 SizeBits:1024] map[Name:i_bytes OffsetBits:2048 SizeBits:16] map[Name:i_blkbits OffsetBits:2064 SizeBits:8] map[Name:i_write_hint OffsetBits:2072 SizeBits:8] map[Name:i_blocks OffsetBits:2112 SizeBits:64] map[Name:i_state OffsetBits:2176 SizeBits:32] map[Name:i_rwsem OffsetBits:2240 SizeBits:1088] map[Name:dirtied_when OffsetBits:3328 SizeBits:64] map[Name:dirtied_time_when OffsetBits:3392 SizeBits:64] map[Name:i_hash OffsetBits:3456 SizeBits:128] map[Name:i_io_list OffsetBits:3584 SizeBits:128] map[Name:i_wb OffsetBits:3712 SizeBits:64] map[Name:i_wb_frn_winner OffsetBits:3776 SizeBits:32] map[Name:i_wb_frn_avg_time OffsetBits:3808 SizeBits:16] map[Name:i_wb_frn_history OffsetBits:3824 SizeBits:16] map[Name:i_lru OffsetBits:3840 SizeBits:128] map[Name:i_sb_list OffsetBits:3968 SizeBits:128] map[Name:i_wb_list OffsetBits:4096 SizeBits:128] map[Name: OffsetBits:4224 SizeBits:128] map[Name:i_version OffsetBits:4352 SizeBits:64] map[Name:i_sequence OffsetBits:4416 SizeBits:64] map[Name:i_count OffsetBits:4480 SizeBits:32] map[Name:i_dio_count OffsetBits:4512 SizeBits:32] map[Name:i_writecount OffsetBits:4544 SizeBits:32] map[Name:i_readcount OffsetBits:4576 SizeBits:32] map[Name: OffsetBits:4608 SizeBits:64] map[Name:i_flctx OffsetBits:4672 SizeBits:64] map[Name:i_data OffsetBits:4736 SizeBits:5120] map[Name: OffsetBits:9856 SizeBits:128] map[Name: OffsetBits:9984 SizeBits:64] map[Name:i_fsnotify_mask OffsetBits:10048 SizeBits:32] map[Name:i_fsnotify_marks OffsetBits:10112 SizeBits:64] map[Name:i_private OffsetBits:10176 SizeBits:64]]] |
| 56/4 | 2026/03/16 15:21 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 38288 output: 938 thoughts: 760 Thoughts: **My Analysis of the Folio Mapping Issue**
Okay, let me walk through this. First, I've got the offsets: `i_sb` at 40 bytes, `i_blkbits` at 258 bytes. But the assembly doesn't get that deep. It shows `mov (%r14), %rbx`, which implies we're looking at `inode = mapping->host`. That's a good starting point. My initial assessment confirms the user's hunch; the evidence aligns.
Now, a crucial check: `0x18(%r12)`. If `%r12` is `folio`, then `0x18` is `mapping`. If it's `page`, same deal: `0x18` is `mapping`. In the kernel, it's pretty common for `folio` and `page` to be interchangeable in the assembly when the addresses are the same. We're dealing with `struct folio *folio` here. The code snippet confirms that:
```c
static int metapage_write_folio(struct folio *folio,
struct writeback_control *wbc)
{
struct bio *bio = NULL;
int block_offset; /* block offset of mp within page */
struct inode *inode = folio->mapping->host;
```
Let's look at the assembly: `lea 0x18(%r12),%r14`. `%r14` is being loaded with `&folio->mapping`. Then we see a KASAN check on `%r14`. Then `mov (%r14), %r14`, meaning `%r14` now holds `folio->mapping`. *That's* where the crash happens. KASAN catches it. `%r14` is 0. So, *`folio->mapping` is NULL*.
Alright, so far, so good. Now, I need to verify: is `folio->mapping` really at `0x18`? Okay, let's break it down: `flags` (8 bytes) + `lru` (16 bytes) = 24 bytes, or `0x18`. Yep, that's correct.
Wait a second, what if `struct list_head` isn't 16 bytes? In 64-bit, it's 2 pointers, so yeah, 16 bytes. Good.
Okay, is there anything *before* `mapping` in the `struct folio` definition? Let's check:
```c
memdesc_flags_t flags;
union {
struct list_head lru;
struct {
void *__filler;
unsigned int mlock_count;
};
struct dev_pagemap *pgmap;
};
struct address_space *mapping;
```
`memdesc_flags_t` is `unsigned long`. The `union` contains `struct list_head` (16 bytes). The other union members are smaller or equal. So the union size is 16. The offset of `mapping` is therefore 8 + 16 = 24. So yes.
Finally, can `folio` possibly *not* be at the start of the allocated memory block? No. A `struct folio *` pointer always points to the beginning of the `struct folio` itself.
Final confirmation:
- `%r12` = `folio`
- `0x18(%r12)` = `folio->mapping`
- `folio->mapping` is NULL.
- The crash is a NULL pointer dereference (KASAN confirmed) when trying to access `folio->mapping->host`.
I'm ready to craft the response now.
|
| 57/2 | 2026/03/16 15:22 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 29466 output: 49 thoughts: 101 Thoughts: **Debugging a Null Folio Mapping** Alright, so I'm hitting a snag here. `folio->mapping` is, as I suspected, `NULL`. Now, the critical question: *why* is it `NULL`? It's clear that the folio has been ejected from the page cache. That much I know. That means `invalidate_folio` must have been called at some point. And, for JFS metapages, the `jfs_metapage_aops.invalidate_folio` function pointer resolves to `metapage_invalidate_folio`. It's time to refresh my memory and step through that function again to understand what's happening. I need to figure out the exact point at which the mapping is being cleared so I can trace this back to the root cause of the removal from the cache. |
| 58/2 | 2026/03/16 15:22 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:metapage_invalidate_folio] Results: map[SourceCode: 664: static void metapage_invalidate_folio(struct folio *folio, size_t offset,
665: size_t length)
666: {
667: BUG_ON(offset || length < folio_size(folio));
668:
669: BUG_ON(folio_test_writeback(folio));
670:
671: metapage_release_folio(folio, 0);
672: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 59/2 | 2026/03/16 15:22 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 29957 output: 49 thoughts: 10 Thoughts: Okay, here's my summary, tailored for an expert audience, in the first person, reflecting the implied thought process: **Internal Audit of `metapage_release_folio`** Alright, let's break down this `metapage_release_folio` concept. From the context, I'm assuming this refers to a specific process or object within a larger system. I need to understand its lifecycle, especially concerning its release. So, what are the critical points I need to consider? Is this an atomic operation, or does it involve various asynchronous steps? The implications of a failed release, or a partial release, are important. How does the system handle error scenarios? I need to think about data consistency. Is there a defined rollback strategy if things go wrong during the release? The access control is also interesting: who is allowed to trigger a release? Are there security vulnerabilities I need to be aware of? I'll need to look at the logging. Is the release process sufficiently logged for auditing and debugging purposes? What metrics are being tracked related to this folio? And finally, if this process has an impact on external systems, I must fully analyze those implications. |
| 60/2 | 2026/03/16 15:23 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:metapage_release_folio] Results: map[SourceCode: 612: static bool metapage_release_folio(struct folio *folio, gfp_t gfp_mask)
613: {
614: struct metapage *mp;
615: bool ret = true;
616: int offset;
617:
618: for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
619: mp = folio_to_mp(folio, offset);
620:
621: if (!mp)
622: continue;
623:
624: jfs_info("metapage_release_folio: mp = 0x%p", mp);
625: if (mp->count || mp->nohomeok ||
626: test_bit(META_dirty, &mp->flag)) {
627: jfs_info("count = %ld, nohomeok = %d", mp->count,
628: mp->nohomeok);
629: ret = false;
630: continue;
631: }
632: if (mp->lsn)
633: remove_from_logsync(mp);
634: remove_metapage(folio, mp);
635: INCREMENT(mpStat.pagefree);
636: free_metapage(mp);
637: }
638: return ret;
639: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 61/2 | 2026/03/16 15:23 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 30697 output: 39 thoughts: 58 Thoughts: **Analyzing Folio Release Failure** Okay, so if `metapage_release_folio` comes back as false, that means the folio isn't being released. That's the key. Now, where are we calling `invalidate_folio`? I seem to recall it's called by either `truncate_cleanup_folio` or `filemap_free_folio`. Let's assume that it's likely called by `truncate_cleanup_folio`, as that sounds like a good place to start my investigation. Time to dive into `truncate_cleanup_folio` and see what's going on there. |
| 62/2 | 2026/03/16 15:23 | tool | codesearch-definition-source |
Args:map[ContextFile:mm/truncate.c IncludeLines:true Name:truncate_cleanup_folio] Results: map[SourceCode: 144: /*
145: * If truncate cannot remove the fs-private metadata from the page, the page
146: * becomes orphaned. It will be left on the LRU and may even be mapped into
147: * user pagetables if we're racing with filemap_fault().
148: *
149: * We need to bail out if page->mapping is no longer equal to the original
150: * mapping. This happens a) when the VM reclaimed the page while we waited on
151: * its lock, b) when a concurrent invalidate_mapping_pages got there first and
152: * c) when tmpfs swizzles a page between a tmpfs inode and swapper_space.
153: */
154: static void truncate_cleanup_folio(struct folio *folio)
155: {
156: if (folio_mapped(folio))
157: unmap_mapping_folio(folio);
158:
159: if (folio_needs_release(folio))
160: folio_invalidate(folio, 0, folio_size(folio));
161:
162: /*
163: * Some filesystems seem to re-dirty the page even after
164: * the VM has canceled the dirty bit (eg ext3 journaling).
165: * Hence dirty accounting check is placed after invalidation.
166: */
167: folio_cancel_dirty(folio);
168: }
SourceFile:mm/truncate.c] |
| 63/2 | 2026/03/16 15:23 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 31362 output: 39 thoughts: 8 Thoughts: Okay, here's my attempt at summarizing the thought process, as if it were my own, assuming the user is an expert. **My Analysis of `truncate_inode_folio`** Alright, so I'm looking at this `truncate_inode_folio` function. I see it's responsible for, essentially, shrinking a file's size on disk, right? I'm immediately thinking about the complexities involved. The core task is pretty straightforward: free up the allocated blocks that are now beyond the new, smaller file size. But the devil, as always, is in the details. First, I'm considering the folio aspect. Okay, we're operating at the folio level, which means we're dealing with page-aligned data. Makes sense, given the focus on efficient memory management within the kernel. I'm visualizing the data structures involved, the folio itself, the inode, the block mapping structures. I'm thinking about how the filesystem keeps track of which blocks are associated with which file, and how the truncation needs to update all that metadata consistently. I'm mentally stepping through the potential scenarios. What if the truncation happens in the middle of a folio? That's a classic case that requires careful handling. We need to zero out the portion of the folio that's being "released" from the file, to prevent data leakage. And we need to make sure that any dirty pages are written back to disk *before* freeing the associated blocks. Data integrity is paramount. Then there's the issue of concurrency. This function is likely called from multiple contexts – different processes truncating the same file, or from background tasks. So I'm immediately thinking about locking and synchronization. I'm looking for the spinlocks, mutexes, whatever mechanisms are in place to ensure that this operation is atomic and doesn't lead to race conditions or corruption. It's a critical section, no doubt. 
I'm checking for potential deadlocks, ensuring the locking order is sane and prevents starvation. I'm also considering the implications for caching. Truncation can invalidate cached data. We'll need to evict any cached folios that are now outside the new file size, and maybe even proactively discard cache entries for folios that are completely freed. That's a performance optimization, ensuring we don't hold onto stale data. The cache invalidation logic is going to be important to get right to avoid stale data issues. The interplay with block allocation is what I'm focused on right now. The freeing of the blocks is straightforward, I understand the block allocator and how it works. I'm mentally checking what block allocation strategies are used. Are there any special considerations for handling partially freed blocks or for coalescing adjacent free blocks? Is this operation going to impact the overall fragmentation of the file system? Finally, error handling. What happens if something goes wrong during the process? Disk errors, allocation failures, or synchronization issues? We need to ensure that the file system remains in a consistent state, even if there's a failure. That means transaction-like semantics – either the truncation completes successfully, or it's rolled back, leaving the file and file system as they were before the operation began. That's critical for data reliability. This is where I'm expecting some clever error recovery to prevent any data loss. So I'll be looking carefully at the error return codes and the handling of exceptional conditions. |
| 64/2 | 2026/03/16 15:24 | tool | codesearch-definition-source |
Args:map[ContextFile:mm/truncate.c IncludeLines:true Name:truncate_inode_folio] Results: map[SourceCode: 170: int truncate_inode_folio(struct address_space *mapping, struct folio *folio)
171: {
172: if (folio->mapping != mapping)
173: return -EIO;
174:
175: truncate_cleanup_folio(folio);
176: filemap_remove_folio(folio);
177: return 0;
178: }
SourceFile:mm/truncate.c] |
| 65/2 | 2026/03/16 15:24 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 32270 output: 41 thoughts: 134 Thoughts: **My Line of Reasoning on Page Cache Invalidation** Okay, so `truncate_inode_folio` initiates this whole process, which calls `truncate_cleanup_folio`. That, in turn, calls `folio_invalidate` and that ultimately calls `invalidate_folio`, right? And the subsequent step is calling `filemap_remove_folio(folio)`. `filemap_remove_folio` removes the folio from the page cache. And, this is the crux of it... it sets `folio->mapping` to NULL! Interesting. So, even if the underlying `invalidate_folio` operation failed to completely clean up the folio's private data – for example, if `metapage_release_folio` returned false – the folio gets *removed* from the page cache regardless! Wait a minute... I need to be sure I'm following the logic correctly. Is that really the case? Time to double-check the implementation of `filemap_remove_folio` itself. Need to verify that behavior. |
| 66/2 | 2026/03/16 15:25 | tool | codesearch-definition-source |
Args:map[ContextFile:mm/filemap.c IncludeLines:true Name:filemap_remove_folio] Results: map[SourceCode: 242: /**
243: * filemap_remove_folio - Remove folio from page cache.
244: * @folio: The folio.
245: *
246: * This must be called only on folios that are locked and have been
247: * verified to be in the page cache. It will never put the folio into
248: * the free list because the caller has a reference on the page.
249: */
250: void filemap_remove_folio(struct folio *folio)
251: {
252: struct address_space *mapping = folio->mapping;
253:
254: BUG_ON(!folio_test_locked(folio));
255: spin_lock(&mapping->host->i_lock);
256: xa_lock_irq(&mapping->i_pages);
257: __filemap_remove_folio(folio, NULL);
258: xa_unlock_irq(&mapping->i_pages);
259: if (mapping_shrinkable(mapping))
260: inode_lru_list_add(mapping->host);
261: spin_unlock(&mapping->host->i_lock);
262:
263: filemap_free_folio(mapping, folio);
264: }
SourceFile:mm/filemap.c] |
| 67/2 | 2026/03/16 15:25 | llm | debugger |
0mModel:gemini-3.1-pro-preview Tokens: input: 33052 output: 41 thoughts: 9 Thoughts: Okay, here's my summary, in the first person, as an expert in the field would perceive the thought process: **Diving into `filemap_free_folio`** Right, so I'm looking at `filemap_free_folio`, and immediately I'm thinking about the implications. This is where we're releasing a `folio` that's part of a filemap. It’s a core operation for managing file-backed memory. I need to keep in mind the potential for page reclaim, I/O completion, and, of course, the ever-present race conditions. My first thought is always, "What's the context?" What's the state of the folio, the filemap, and the underlying filesystem? Are we dealing with dirty pages? Clean pages? Pages that are still in flight with I/O? The locking strategy here is critical. We can't have data corruption or even a use-after-free situation. I need to mentally trace the calls that lead here and understand all the other structures implicated in the freeing process, e.g. the folio, and potentially the radix tree it's part of. The `folio` structure itself is a minefield. I'm going through the different members of the folio as I see the code, considering the refcounts, the dirty bits, whether it's locked, and what's connected to it. I want to check for any obvious gotchas that could lead to memory leaks or deadlocks, such as, is the folio still mapped in any process address space? Are we properly clearing any associated metadata? Then I'm evaluating the possibility of race conditions. This is going to involve deep consideration of things like locks. A lot of operations on this code path are multi-threaded. So, I have to be particularly careful about all of the structures involved. Am I thinking of all the locks needed to protect this? Which locks are being acquired and in what order? What are the implications if a signal interrupts the process here? Does the code handle all the potential error paths correctly? 
Finally, what are the overall memory management implications? How does this freeing impact the global system memory counters, the page allocator, and the filesystem cache? Am I potentially causing a burst of I/O? What about reclaim? Is this a good point to start reclaiming memory? |
| 68/2 | 2026/03/16 15:26 | tool | codesearch-definition-source |
0mArgs:map[ContextFile:mm/filemap.c IncludeLines:true Name:filemap_free_folio] Results: map[SourceCode: 231: void filemap_free_folio(struct address_space *mapping, struct folio *folio)
232: {
233: void (*free_folio)(struct folio *);
234:
235: free_folio = mapping->a_ops->free_folio;
236: if (free_folio)
237: free_folio(folio);
238:
239: folio_put_refs(folio, folio_nr_pages(folio));
240: }
SourceFile:mm/filemap.c] |
| 69/2 | 2026/03/16 15:26 | llm | debugger |
0mModel:gemini-3.1-pro-preview Tokens: input: 33766 output: 42 thoughts: 23 Thoughts: Here's my thought process, as requested: **Navigating the `folio->mapping` Nullification** Okay, I'm digging into how folios are managed, and I'm trying to trace the lifecycle of a `folio` structure. My immediate question is: where is that crucial `folio->mapping` pointer being explicitly set to `NULL`? I know that nullification is a very common signal in the memory management world and usually signifies the relinquishing of ownership of the `folio`. I remember seeing this function, `__filemap_remove_folio`, in passing - that looks like a likely candidate, but I need to make sure. I'll need to go and confirm that this is where I would expect that this happens. That's the first place to investigate to understand the folio's lifecycle and ownership semantics. |
| 70/2 | 2026/03/16 15:27 | tool | codesearch-definition-source |
0mArgs:map[ContextFile:mm/filemap.c IncludeLines:true Name:__filemap_remove_folio] Results: map[SourceCode: 217: /*
218: * Delete a page from the page cache and free it. Caller has to make
219: * sure the page is locked and that nobody else uses it - or that usage
220: * is safe. The caller must hold the i_pages lock.
221: */
222: void __filemap_remove_folio(struct folio *folio, void *shadow)
223: {
224: struct address_space *mapping = folio->mapping;
225:
226: trace_mm_filemap_delete_from_page_cache(folio);
227: filemap_unaccount_folio(mapping, folio);
228: page_cache_delete(mapping, folio, shadow);
229: }
SourceFile:mm/filemap.c] |
| 71/2 | 2026/03/16 15:27 | llm | debugger |
0mModel:gemini-3.1-pro-preview Tokens: input: 34247 output: 40 thoughts: 8 Thoughts: Okay, here's my interpretation of that: **My Brain on Memory Management: A Flash of Inspiration (or Maybe Dread)** Right, let's see... the text mentions something like `page_cache_delete`. Okay, I know what that is. It immediately conjures up images of trying to keep the filesystem efficient. This command is probably called when we're trying to evict a page from the page cache. Essentially, this means we're kicking data out of that fast memory space and back to disk, or maybe even just discarding it if the data is already consistent with storage. My mind jumps to all the potential implications. Am I dealing with a file being closed? Data being overwritten? A memory crunch prompting the OS to reclaim space? What's the context here? Is it a single file, a whole directory, or is it a more systemic operation? I need more information to fully assess the situation. I need to be thinking about performance and the effects of our actions. What is going on in the kernel right now that is triggering `page_cache_delete`? Am I making a mistake? |
| 72/2 | 2026/03/16 15:27 | tool | codesearch-definition-source |
0mArgs:map[ContextFile:mm/filemap.c IncludeLines:true Name:page_cache_delete] Results: map[SourceCode: 129: static void page_cache_delete(struct address_space *mapping,
130: struct folio *folio, void *shadow)
131: {
132: XA_STATE(xas, &mapping->i_pages, folio->index);
133: long nr = 1;
134:
135: mapping_set_update(&xas, mapping);
136:
137: xas_set_order(&xas, folio->index, folio_order(folio));
138: nr = folio_nr_pages(folio);
139:
140: VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
141:
142: xas_store(&xas, shadow);
143: xas_init_marks(&xas);
144:
145: folio->mapping = NULL;
146: /* Leave folio->index set: truncation lookup relies upon it */
147: mapping->nrpages -= nr;
148: }
SourceFile:mm/filemap.c] |
| 73/2 | 2026/03/16 15:27 | llm | debugger |
1mModel:gemini-3.1-pro-preview Tokens: input: 34847 output: 42 thoughts: 467 Thoughts: **The Null Mapping Mystery: A Deep Dive into Folio Corruption**
Okay, so I've confirmed that `folio->mapping = NULL;` is set unconditionally. That's good, a clean slate after truncation. This means, if `truncate_inode_folio` runs, the folio is evicted from the page cache, and its mapping is, without a doubt, set to NULL.
But then, the *metapage*... it's the usual suspect isn't it? If `metapage_release_folio` *returns false*, the metapage itself *isn't* removed from the folio. And the metapage's pointer `mp->folio = folio` remains valid, pointing to the original folio.
Fast forward: we hit `txUnlock`, and `put_metapage(mp)` is called. That leads to `release_metapage(mp)`. Here's where it gets interesting, or rather, worrisome. Inside `release_metapage`, if the `META_dirty` flag is set, it calls `metapage_write_one(folio)`. Ultimately this leads to `metapage_write_folio(folio, &wbc)`. And there's the problem: `metapage_write_folio` dereferences `folio->mapping->host`, but `folio->mapping` is NULL! The obvious conclusion is that this will cause a kernel panic, or at the very least a read through an invalid pointer.
The core question nagging at me is: why is `release_metapage` even *trying* to write a potentially "dirty" metapage after the folio has been truncated? If the folio is gone, it shouldn't even *be* written back.
Let's dissect `release_metapage`:
```c
if (test_bit(META_dirty, &mp->flag)) {
folio_mark_dirty(folio);
if (test_bit(META_sync, &mp->flag)) {
clear_bit(META_sync, &mp->flag);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
folio_lock(folio);
}
} else if (mp->lsn) /* discard_metapage doesn't remove it */
remove_from_logsync(mp);
```
If `folio->mapping` is NULL after truncation, what does `folio_mark_dirty` even *do*? Does it quietly do nothing, or does it try to do something with the now-invalid mapping and crash? I need to check `folio_mark_dirty`'s implementation to be certain, but I suspect it's going to be a problem. This is a potential data corruption bug, and I need to know the next step to prevent it.
|
| 74/2 | 2026/03/16 15:29 | tool | codesearch-definition-source |
0mArgs:map[ContextFile:mm/page-writeback.c IncludeLines:true Name:folio_mark_dirty] Results: map[SourceCode:2744: /**
2745: * folio_mark_dirty - Mark a folio as being modified.
2746: * @folio: The folio.
2747: *
2748: * The folio may not be truncated while this function is running.
2749: * Holding the folio lock is sufficient to prevent truncation, but some
2750: * callers cannot acquire a sleeping lock. These callers instead hold
2751: * the page table lock for a page table which contains at least one page
2752: * in this folio. Truncation will block on the page table lock as it
2753: * unmaps pages before removing the folio from its mapping.
2754: *
2755: * Return: True if the folio was newly dirtied, false if it was already dirty.
2756: */
2757: bool folio_mark_dirty(struct folio *folio)
2758: {
2759: struct address_space *mapping = folio_mapping(folio);
2760:
2761: if (likely(mapping)) {
2762: /*
2763: * readahead/folio_deactivate could remain
2764: * PG_readahead/PG_reclaim due to race with folio_end_writeback
2765: * About readahead, if the folio is written, the flags would be
2766: * reset. So no problem.
2767: * About folio_deactivate, if the folio is redirtied,
2768: * the flag will be reset. So no problem. but if the
2769: * folio is used by readahead it will confuse readahead
2770: * and make it restart the size rampup process. But it's
2771: * a trivial problem.
2772: */
2773: if (folio_test_reclaim(folio))
2774: folio_clear_reclaim(folio);
2775: return mapping->a_ops->dirty_folio(mapping, folio);
2776: }
2777:
2778: return noop_dirty_folio(mapping, folio);
2779: }
SourceFile:mm/page-writeback.c] |
| 75/2 | 2026/03/16 15:29 | llm | debugger |
1mModel:gemini-3.1-pro-preview Tokens: input: 36587 output: 27 thoughts: 447 Thoughts: **Debugging a Truncation Issue in JFS Metapage Handling** Okay, so here's what's happening. `folio_mark_dirty` *shouldn't* crash because it handles a `NULL` mapping gracefully, returning a no-op dirty folio. This sets the dirty bit. Then, `metapage_write_one` is called. That in turn calls `folio_clear_dirty_for_io`, which checks for a `NULL` mapping and, crucially, returns *true* if the folio *was* dirty. This is a problem! The crucial point is that since the dirty bit was set by `noop_dirty_folio`, `folio_clear_dirty_for_io` returns true when the mapping is NULL. Then `metapage_write_one` proceeds to call `metapage_write_folio(folio, &wbc)`. This is where it falls apart, because `metapage_write_folio` *assumes* the folio's mapping isn't `NULL` and tries to write the folio. Wait... why is `release_metapage` even *attempting* to write a metapage if the folio has been truncated? It should just discard it. I see that `release_metapage` doesn't check `folio->mapping` for `NULL`. It *should*! That `NULL` mapping is the flag that indicates the folio was truncated, and it shouldn't be written back. If the mapping is `NULL`, the folio has been truncated, and we absolutely do *not* want to write it. This leads to another question: why is the `META_dirty` bit set in the first place for a truncated folio? Aha! The metapage *was* modified, but then the file was truncated *before* it could be written back. That explains it. Okay, let's refocus. These are JFS *metapages*. They deal with *metadata*, not file data. So, the scenario becomes: Are metadata folios ever truncated? And, a quick sanity check, `jfs_metapage_aops` is used for `jfs_inode->i_mapping`? No, that's not right. It is used for block device mapping or the special metapage mapping? Time to check where `jfs_metapage_aops` is actually used. Need to see what the data actually looks like. |
| 76/2 | 2026/03/16 15:31 | tool | codesearch-find-references |
0mArgs:map[Name:jfs_metapage_aops] Results: map[References:<nil> TruncatedOutput:false] |
| 77/2 | 2026/03/16 15:31 | llm | debugger |
2mModel:gemini-3.1-pro-preview Tokens: input: 37603 output: 23 thoughts: 24 Thoughts: Here's my thought process, summarized as you requested: **Investigating JFS Metapage Operations** Okay, so I need to dive into some JFS internals. My focus is on understanding how the system handles metapage operations, specifically within the JFS file system code. I'll start by searching the `fs/jfs/` directory for the function or symbol `jfs_metapage_aops`. This is a pretty standard debugging approach – pinpointing a specific function name to narrow down the relevant code base. Hopefully, this function is central to my understanding, maybe it holds some keys to the data lifecycle. This should give me a good entry point to start tracing the execution and data flow around metapage management in JFS. Let's see what we can find. |
| 78/2 | 2026/03/16 15:34 | tool | grepper |
0mArgs:map[Expression:jfs_metapage_aops] Results: map[Output:fs/jfs/jfs_imap.c=418=struct inode *diReadSpecial(struct super_block *sb, ino_t inum, int secondary)
--
fs/jfs/jfs_imap.c-468-
fs/jfs/jfs_imap.c:469: ip->i_mapping->a_ops = &jfs_metapage_aops;
fs/jfs/jfs_imap.c-470- mapping_set_gfp_mask(ip->i_mapping, GFP_NOFS);
--
fs/jfs/jfs_metapage.c=664=static void metapage_invalidate_folio(struct folio *folio, size_t offset,
--
fs/jfs/jfs_metapage.c-673-
fs/jfs/jfs_metapage.c:674:const struct address_space_operations jfs_metapage_aops = {
fs/jfs/jfs_metapage.c-675- .read_folio = metapage_read_folio,
--
fs/jfs/jfs_metapage.h=122=static inline void metapage_homeok(struct metapage *mp)
--
fs/jfs/jfs_metapage.h-128-
fs/jfs/jfs_metapage.h:129:extern const struct address_space_operations jfs_metapage_aops;
fs/jfs/jfs_metapage.h-130-
--
fs/jfs/super.c=441=static int jfs_fill_super(struct super_block *sb, struct fs_context *fc)
--
fs/jfs/super.c-516- inode->i_size = bdev_nr_bytes(sb->s_bdev);
fs/jfs/super.c:517: inode->i_mapping->a_ops = &jfs_metapage_aops;
fs/jfs/super.c-518- inode_fake_hash(inode);
] |
| 79/2 | 2026/03/16 15:34 | llm | debugger |
1mModel:gemini-3.1-pro-preview Tokens: input: 38343 output: 43 thoughts: 340 Thoughts: **Metapage Inode Truncation in JFS** Okay, so it looks like `jfs_metapage_aops` is the key for managing the `diReadSpecial` inodes, and it's tied to the `sb->s_bdev` mapping. Interesting. I recall now that `super.c:517` sets `inode->i_mapping->a_ops = &jfs_metapage_aops;` specifically for `sbi->direct_inode`. Right, `sbi->direct_inode` is *the* special inode used for metapages in JFS. The implication is, those folios aren't truncated during normal operations, they're only removed at unmount. But hold on, `diReadSpecial` also sets the `a_ops` to `&jfs_metapage_aops`! `diReadSpecial` handles `FILESYSTEM_I`, `AGGREGATE_I`, and those types of special inodes too. This raises the question: Are *those* special inodes ever truncated? Aha! Yes, they are. They get truncated when the filesystem is unmounted. I remember now that `iput` is called on these special inodes during the unmount process. `iput` calls `evict`, which then calls `truncate_inode_pages_final`. That function is what clears the page cache, removing all folios. Now, a potential issue comes to mind... if there are still dirty metapages lingering in the `tlock` list, could they get released later on? The question is, if we unmount the filesystem, `jfs_umount` (or potentially `jfs_put_super`) *should* flush all transactions and wait for them to complete before `iput`ting the special inodes. I think I'll need to double-check the code in `jfs_umount` and `jfs_put_super` to be absolutely certain. |
| 80/2 | 2026/03/16 15:35 | tool | codesearch-definition-source |
0mArgs:map[ContextFile:fs/jfs/super.c IncludeLines:true Name:jfs_put_super] Results: map[SourceCode: 185: static void jfs_put_super(struct super_block *sb)
186: {
187: struct jfs_sb_info *sbi = JFS_SBI(sb);
188: int rc;
189:
190: jfs_info("In jfs_put_super");
191:
192: jfs_quota_off_umount(sb);
193:
194: rc = jfs_umount(sb);
195: if (rc)
196: jfs_err("jfs_umount failed with return code %d", rc);
197:
198: unload_nls(sbi->nls_tab);
199:
200: truncate_inode_pages(sbi->direct_inode->i_mapping, 0);
201: iput(sbi->direct_inode);
202:
203: kfree(sbi);
204: }
SourceFile:fs/jfs/super.c] |
| 81/2 | 2026/03/16 15:35 | llm | debugger |
1mModel:gemini-3.1-pro-preview Tokens: input: 39468 output: 46 thoughts: 81 Thoughts: Here's my summary, as though I'm thinking through this process: **Diving Deep into JFS Unmount** Okay, so I'm looking at `jfs_put_super`, and the first thing that jumps out is that `truncate_inode_pages(sbi->direct_inode->i_mapping, 0)` call. Right, that's supposed to clear the inode pages. But here's the kicker: if the `tlock` list *still* has metapages hanging around at this point, they'll get truncated by that operation! That feels a little… aggressive, and potentially problematic. Now, my initial thought is, shouldn't `jfs_umount` have taken care of flushing everything, including those metapages? It should, right? The whole point of unmounting is to ensure a clean shutdown, to get everything properly written to disk. So, I need to double-check `jfs_umount` to confirm that it's doing its job and emptying the transaction logs, the buffer cache and handling anything that might result in this issue. I need to make sure we're not inadvertently losing data or introducing corruption scenarios. Let's trace through `jfs_umount` and see what's actually going on. |
| 82/2 | 2026/03/16 15:37 | tool | codesearch-definition-source |
0mArgs:map[ContextFile:fs/jfs/jfs_umount.c IncludeLines:true Name:jfs_umount] Results: map[SourceCode: 26: /*
27: * NAME: jfs_umount(vfsp, flags, crp)
28: *
29: * FUNCTION: vfs_umount()
30: *
31: * PARAMETERS: vfsp - virtual file system pointer
32: * flags - unmount for shutdown
33: * crp - credential
34: *
35: * RETURN : EBUSY - device has open files
36: */
37: int jfs_umount(struct super_block *sb)
38: {
39: struct jfs_sb_info *sbi = JFS_SBI(sb);
40: struct inode *ipbmap = sbi->ipbmap;
41: struct inode *ipimap = sbi->ipimap;
42: struct inode *ipaimap = sbi->ipaimap;
43: struct inode *ipaimap2 = sbi->ipaimap2;
44: struct jfs_log *log;
45: int rc = 0;
46:
47: jfs_info("UnMount JFS: sb:0x%p", sb);
48:
49: /*
50: * update superblock and close log
51: *
52: * if mounted read-write and log based recovery was enabled
53: */
54: if ((log = sbi->log))
55: /*
56: * Wait for outstanding transactions to be written to log:
57: */
58: jfs_flush_journal(log, 2);
59:
60: /*
61: * close fileset inode allocation map (aka fileset inode)
62: */
63: diUnmount(ipimap, 0);
64:
65: diFreeSpecial(ipimap);
66: sbi->ipimap = NULL;
67:
68: /*
69: * close secondary aggregate inode allocation map
70: */
71: if (ipaimap2) {
72: diUnmount(ipaimap2, 0);
73: diFreeSpecial(ipaimap2);
74: sbi->ipaimap2 = NULL;
75: }
76:
77: /*
78: * close aggregate inode allocation map
79: */
80: diUnmount(ipaimap, 0);
81: diFreeSpecial(ipaimap);
82: sbi->ipaimap = NULL;
83:
84: /*
85: * close aggregate block allocation map
86: */
87: dbUnmount(ipbmap, 0);
88:
89: diFreeSpecial(ipbmap);
90: sbi->ipbmap = NULL;
91:
92: /*
93: * Make sure all metadata makes it to disk before we mark
94: * the superblock as clean
95: */
96: filemap_write_and_wait(sbi->direct_inode->i_mapping);
97:
98: /*
99: * ensure all file system file pages are propagated to their
100: * home blocks on disk (and their in-memory buffer pages are
101: * invalidated) BEFORE updating file system superblock state
102: * (to signify file system is unmounted cleanly, and thus in
103: * consistent state) and log superblock active file system
104: * list (to signify skip logredo()).
105: */
106: if (log) { /* log = NULL if read-only mount */
107: updateSuper(sb, FM_CLEAN);
108:
109: /*
110: * close log:
111: *
112: * remove file system from log active file system list.
113: */
114: rc = lmLogClose(sb);
115: }
116: jfs_info("UnMount JFS Complete: rc = %d", rc);
117: return rc;
118: }
SourceFile:fs/jfs/jfs_umount.c] |
| 83/2 | 2026/03/16 15:37 | llm | debugger |
1mModel:gemini-3.1-pro-preview Tokens: input: 41076 output: 44 thoughts: 343 Thoughts: **Analyzing a Potential JFS Unmount Issue** Okay, so I'm looking at this crash trace and something's not quite right. My initial thought is, `jfs_umount` calls `filemap_write_and_wait(sbi->direct_inode->i_mapping)`. That's standard practice. But then I started thinking about the metapages. Are there any hanging around in the `tlock` list? It's a key point to consider. Then I see the `txUnlock` call in the trace. And that's called from `jfs_lazycommit`, which runs in a kthread. Hmm, interesting. The crash specifically involves `metapage_write_one`, `release_metapage`, and the `txUnlock` path, all the way back to the kthread. So `jfs_lazycommit` is clearly running asynchronously. The real question is: when the filesystem unmounts, does it correctly wait for this background commit thread to finish its work? Or are we seeing a race condition where the unmount process is preempting the completion of those writes? That's what I want to nail down. I need to look at `jfs_txnmgr.c` and specifically how `jfs_lazycommit` is stopped during an unmount. I need to understand the shutdown sequence and ensure that all those metapages are flushed before the filesystem is truly gone. Otherwise, we are dealing with a classic data corruption scenario on umount. This needs a deeper dive into the filesystem code's unmount handling. |
| 84/2 | 2026/03/16 15:38 | tool | codesearch-definition-source |
0mError:requested entity does not exist Args: map[ContextFile:fs/jfs/jfs_txnmgr.c IncludeLines:true Name:txQuit] Results: map[SourceCode: SourceFile:] |
| 85/2 | 2026/03/16 15:38 | llm | debugger |
0mModel:gemini-3.1-pro-preview Tokens: input: 41820 output: 19 thoughts: 20 Thoughts: Okay, here's my summary, taking on the persona you requested: **Investigating `jfs_lazycommit` in the JFS Filesystem Code** Alright, let's dive into this. The objective is to understand how lazy commit works within the JFS filesystem, which means I need to start by pinpointing the relevant code. Given the context, `jfs_lazycommit` is clearly the target. Therefore, I will begin by searching for that exact string within the JFS directory structure. Specifically, I'll be looking in the `fs/jfs/` path of the kernel source. This search will be the foundation for figuring out how JFS handles delayed writes. I'm expecting this will likely lead to related functions, structures, and potentially, configurable parameters that drive the lazy commit behavior. I will be looking to get a feel for the underlying algorithm and how it interacts with other system components. |
| 86/2 | 2026/03/16 15:39 | tool | grepper |
0mArgs:map[Expression:jfs_lazycommit] Results: map[Output:fs/jfs/jfs_txnmgr.c=2651=static void txLazyCommit(struct tblock * tblk)
--
fs/jfs/jfs_txnmgr.c-2658- */
fs/jfs/jfs_txnmgr.c:2659: jfs_info("jfs_lazycommit: tblk 0x%p not unlocked", tblk);
fs/jfs/jfs_txnmgr.c-2660- yield();
--
fs/jfs/jfs_txnmgr.c-2692-/*
fs/jfs/jfs_txnmgr.c:2693: * jfs_lazycommit(void)
fs/jfs/jfs_txnmgr.c-2694- *
--
fs/jfs/jfs_txnmgr.c-2698- */
fs/jfs/jfs_txnmgr.c:2699:int jfs_lazycommit(void *arg)
fs/jfs/jfs_txnmgr.c-2700-{
--
fs/jfs/jfs_txnmgr.c-2767- if (!list_empty(&TxAnchor.unlock_queue))
fs/jfs/jfs_txnmgr.c:2768: jfs_err("jfs_lazycommit being killed w/pending transactions!");
fs/jfs/jfs_txnmgr.c-2769- else
fs/jfs/jfs_txnmgr.c:2770: jfs_info("jfs_lazycommit being killed");
fs/jfs/jfs_txnmgr.c-2771- return 0;
--
fs/jfs/jfs_txnmgr.h=295=extern void txLazyUnlock(struct tblock *);
fs/jfs/jfs_txnmgr.h:296:extern int jfs_lazycommit(void *);
fs/jfs/jfs_txnmgr.h-297-extern int jfs_sync(void *);
--
fs/jfs/super.c=951=static int __init init_jfs_fs(void)
--
fs/jfs/super.c-998- for (i = 0; i < commit_threads; i++) {
fs/jfs/super.c:999: jfsCommitThread[i] = kthread_run(jfs_lazycommit, NULL,
fs/jfs/super.c-1000- "jfsCommit");
] |
| 87/2 | 2026/03/16 15:39 | llm | debugger |
0mModel:gemini-3.1-pro-preview Tokens: input: 42698 output: 47 thoughts: 62 Thoughts: **Analyzing the JFS Lazy Commit Mechanism** Okay, so I'm digging into this `jfs_lazycommit` function. My initial assumption was off – it's not a per-superblock thing, it’s a global kthread, or perhaps a pool of them. This changes things. Now I need to understand how it interacts with the commit process at a higher level. Specifically, if it's a global entity, it must be processing the `TxAnchor.unlock_queue`. That seems like the logical connection here. Given that this is the crux, let's dive deeper and thoroughly investigate the inner workings of `jfs_lazycommit`. I need to get a clear picture of its responsibility and where it sits in the broader transaction commitment flow. |
| 88/2 | 2026/03/16 15:40 | tool | codesearch-definition-source |
0mArgs:map[ContextFile:fs/jfs/jfs_txnmgr.c IncludeLines:true Name:jfs_lazycommit] Results: map[SourceCode:2692: /*
2693: * jfs_lazycommit(void)
2694: *
2695: * To be run as a kernel daemon. If lbmIODone is called in an interrupt
2696: * context, or where blocking is not wanted, this routine will process
2697: * committed transactions from the unlock queue.
2698: */
2699: int jfs_lazycommit(void *arg)
2700: {
2701: int WorkDone;
2702: struct tblock *tblk;
2703: unsigned long flags;
2704: struct jfs_sb_info *sbi;
2705:
2706: set_freezable();
2707: do {
2708: LAZY_LOCK(flags);
2709: jfs_commit_thread_waking = 0; /* OK to wake another thread */
2710: while (!list_empty(&TxAnchor.unlock_queue)) {
2711: WorkDone = 0;
2712: list_for_each_entry(tblk, &TxAnchor.unlock_queue,
2713: cqueue) {
2714:
2715: sbi = JFS_SBI(tblk->sb);
2716: /*
2717: * For each volume, the transactions must be
2718: * handled in order. If another commit thread
2719: * is handling a tblk for this superblock,
2720: * skip it
2721: */
2722: if (sbi->commit_state & IN_LAZYCOMMIT)
2723: continue;
2724:
2725: sbi->commit_state |= IN_LAZYCOMMIT;
2726: WorkDone = 1;
2727:
2728: /*
2729: * Remove transaction from queue
2730: */
2731: list_del(&tblk->cqueue);
2732:
2733: LAZY_UNLOCK(flags);
2734: txLazyCommit(tblk);
2735: LAZY_LOCK(flags);
2736:
2737: sbi->commit_state &= ~IN_LAZYCOMMIT;
2738: /*
2739: * Don't continue in the for loop. (We can't
2740: * anyway, it's unsafe!) We want to go back to
2741: * the beginning of the list.
2742: */
2743: break;
2744: }
2745:
2746: /* If there was nothing to do, don't continue */
2747: if (!WorkDone)
2748: break;
2749: }
2750: /* In case a wakeup came while all threads were active */
2751: jfs_commit_thread_waking = 0;
2752:
2753: if (freezing(current)) {
2754: LAZY_UNLOCK(flags);
2755: try_to_freeze();
2756: } else {
2757: DECLARE_WAITQUEUE(wq, current);
2758:
2759: add_wait_queue(&jfs_commit_thread_wait, &wq);
2760: set_current_state(TASK_INTERRUPTIBLE);
2761: LAZY_UNLOCK(flags);
2762: schedule();
2763: remove_wait_queue(&jfs_commit_thread_wait, &wq);
2764: }
2765: } while (!kthread_should_stop());
2766:
2767: if (!list_empty(&TxAnchor.unlock_queue))
2768: jfs_err("jfs_lazycommit being killed w/pending transactions!");
2769: else
2770: jfs_info("jfs_lazycommit being killed");
2771: return 0;
2772: }
SourceFile:fs/jfs/jfs_txnmgr.c] |
| 89/2 | 2026/03/16 15:40 | llm | debugger |
0mModel:gemini-3.1-pro-preview Tokens: input: 44120 output: 45 thoughts: 40 Thoughts: Okay, here's my interpretation of that thought process, assuming I'm an expert in this area: **Deconstructing `jfs_lazycommit` and the Transaction Unlock Queue** Alright, let's break this down. My train of thought starts with `jfs_lazycommit`. My immediate question is: what is `jfs_lazycommit` actually doing with the `TxAnchor.unlock_queue`? It's a key part of its processing, obviously. The next crucial link to understand is: where are these transactions being added to `TxAnchor.unlock_queue`? The answer must be in `txLazyUnlock`. It's all about how and when those transactions get added to the queue to be "unlocked" later, so that's the next thing to investigate. |
| 90/2 | 2026/03/16 15:41 | tool | codesearch-definition-source |
0mArgs:map[ContextFile:fs/jfs/jfs_txnmgr.c IncludeLines:true Name:txLazyUnlock] Results: map[SourceCode:2774: void txLazyUnlock(struct tblock * tblk)
2775: {
2776: unsigned long flags;
2777:
2778: LAZY_LOCK(flags);
2779:
2780: list_add_tail(&tblk->cqueue, &TxAnchor.unlock_queue);
2781: /*
2782: * Don't wake up a commit thread if there is already one servicing
2783: * this superblock, or if the last one we woke up hasn't started yet.
2784: */
2785: if (!(JFS_SBI(tblk->sb)->commit_state & IN_LAZYCOMMIT) &&
2786: !jfs_commit_thread_waking) {
2787: jfs_commit_thread_waking = 1;
2788: wake_up(&jfs_commit_thread_wait);
2789: }
2790: LAZY_UNLOCK(flags);
2791: }
SourceFile:fs/jfs/jfs_txnmgr.c] |
| 91/2 | 2026/03/16 15:41 | llm | debugger |
1mModel:gemini-3.1-pro-preview Tokens: input: 44698 output: 47 thoughts: 102 Thoughts: **Delving into Transaction Management During Unmount** Okay, so I'm tracing this `txLazyUnlock` call – it's triggered from `jfs_txnmgr.c` whenever a transaction hits its commit point. Now, a question arises: what's the behavior during an unmount? Specifically, if `jfs_umount` is called, does it synchronize with the `TxAnchor.unlock_queue` associated with the current superblock? I need to clarify if there's a wait mechanism involved to ensure the queue empties before proceeding. Let's go back and examine `jfs_umount` more closely. Ah, here it is, it calls `jfs_flush_journal(log, 2)`. This `jfs_flush_journal` call is the key, isn't it? I need to check how it behaves in order to understand how the journals behave. |
| 92/2 | 2026/03/16 15:42 | tool | codesearch-definition-source |
0mArgs:map[ContextFile:fs/jfs/jfs_logmgr.c IncludeLines:true Name:jfs_flush_journal] Results: map[SourceCode:1498: /*
1499: * NAME: jfs_flush_journal()
1500: *
1501: * FUNCTION: initiate write of any outstanding transactions to the journal
1502: * and optionally wait until they are all written to disk
1503: *
1504: * wait == 0 flush until latest txn is committed, don't wait
1505: * wait == 1 flush until latest txn is committed, wait
1506: * wait > 1 flush until all txn's are complete, wait
1507: */
1508: void jfs_flush_journal(struct jfs_log *log, int wait)
1509: {
1510: int i;
1511: struct tblock *target = NULL;
1512:
1513: /* jfs_write_inode may call us during read-only mount */
1514: if (!log)
1515: return;
1516:
1517: jfs_info("jfs_flush_journal: log:0x%p wait=%d", log, wait);
1518:
1519: LOGGC_LOCK(log);
1520:
1521: if (!list_empty(&log->cqueue)) {
1522: /*
1523: * This ensures that we will keep writing to the journal as long
1524: * as there are unwritten commit records
1525: */
1526: target = list_entry(log->cqueue.prev, struct tblock, cqueue);
1527:
1528: if (test_bit(log_FLUSH, &log->flag)) {
1529: /*
1530: * We're already flushing.
1531: * if flush_tblk is NULL, we are flushing everything,
1532: * so leave it that way. Otherwise, update it to the
1533: * latest transaction
1534: */
1535: if (log->flush_tblk)
1536: log->flush_tblk = target;
1537: } else {
1538: /* Only flush until latest transaction is committed */
1539: log->flush_tblk = target;
1540: set_bit(log_FLUSH, &log->flag);
1541:
1542: /*
1543: * Initiate I/O on outstanding transactions
1544: */
1545: if (!(log->cflag & logGC_PAGEOUT)) {
1546: log->cflag |= logGC_PAGEOUT;
1547: lmGCwrite(log, 0);
1548: }
1549: }
1550: }
1551: if ((wait > 1) || test_bit(log_SYNCBARRIER, &log->flag)) {
1552: /* Flush until all activity complete */
1553: set_bit(log_FLUSH, &log->flag);
1554: log->flush_tblk = NULL;
1555: }
1556:
1557: if (wait && target && !(target->flag & tblkGC_COMMITTED)) {
1558: DECLARE_WAITQUEUE(__wait, current);
1559:
1560: add_wait_queue(&target->gcwait, &__wait);
1561: set_current_state(TASK_UNINTERRUPTIBLE);
1562: LOGGC_UNLOCK(log);
1563: schedule();
1564: LOGGC_LOCK(log);
1565: remove_wait_queue(&target->gcwait, &__wait);
1566: }
1567: LOGGC_UNLOCK(log);
1568:
1569: if (wait < 2)
1570: return;
1571:
1572: write_special_inodes(log, filemap_fdatawrite);
1573:
1574: /*
1575: * If there was recent activity, we may need to wait
1576: * for the lazycommit thread to catch up
1577: */
1578: if ((!list_empty(&log->cqueue)) || !list_empty(&log->synclist)) {
1579: for (i = 0; i < 200; i++) { /* Too much? */
1580: msleep(250);
1581: write_special_inodes(log, filemap_fdatawrite);
1582: if (list_empty(&log->cqueue) &&
1583: list_empty(&log->synclist))
1584: break;
1585: }
1586: }
1587: assert(list_empty(&log->cqueue));
1588:
1589: #ifdef CONFIG_JFS_DEBUG
1590: if (!list_empty(&log->synclist)) {
1591: struct logsyncblk *lp;
1592:
1593: printk(KERN_ERR "jfs_flush_journal: synclist not empty\n");
1594: list_for_each_entry(lp, &log->synclist, synclist) {
1595: if (lp->xflag & COMMIT_PAGE) {
1596: struct metapage *mp = (struct metapage *)lp;
1597: print_hex_dump(KERN_ERR, "metapage: ",
1598: DUMP_PREFIX_ADDRESS, 16, 4,
1599: mp, sizeof(struct metapage), 0);
1600: print_hex_dump(KERN_ERR, "page: ",
1601: DUMP_PREFIX_ADDRESS, 16,
1602: sizeof(long), mp->folio,
1603: sizeof(struct page), 0);
1604: } else
1605: print_hex_dump(KERN_ERR, "tblock:",
1606: DUMP_PREFIX_ADDRESS, 16, 4,
1607: lp, sizeof(struct tblock), 0);
1608: }
1609: }
1610: #else
1611: WARN_ON(!list_empty(&log->synclist));
1612: #endif
1613: clear_bit(log_FLUSH, &log->flag);
1614: }
SourceFile:fs/jfs/jfs_logmgr.c] |
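The three `wait` levels documented at the top of `jfs_flush_journal` can be restated as a tiny decision helper. This is an illustrative userspace sketch, not kernel API: the struct and field names are invented here purely to summarize the comment block above.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in summarizing jfs_flush_journal's wait semantics;
 * these names do not exist in the kernel. */
struct flush_plan {
	bool flush_all;   /* log->flush_tblk = NULL: flush every transaction */
	bool block;       /* sleep on target->gcwait until commit */
	bool drain_sync;  /* poll until cqueue and synclist are empty */
};

/* wait == 0: flush to the latest committed txn, return immediately.
 * wait == 1: same flush scope, but block until the target commits.
 * wait  > 1: flush everything and also drain the lazycommit backlog. */
static struct flush_plan plan_flush(int wait)
{
	struct flush_plan p = {
		.flush_all = wait > 1,
		.block = wait >= 1,
		.drain_sync = wait > 1,
	};
	return p;
}
```

This matches the `jfs_umount` call noted earlier: `jfs_flush_journal(log, 2)` takes the strongest level, which is why the unmount path is expected to leave `log->synclist` empty.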
| 93/2 | 2026/03/16 15:42 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 46848 output: 43 thoughts: 426 Thoughts: **Analyzing a Potential JFS Crash** Okay, so I'm diving into this JFS crash scenario, trying to piece together the sequence of events. First, `jfs_flush_journal` is waiting for both `log->cqueue` and `log->synclist` to be empty. Right, that's the starting point. Then, I remember `txUnlock` is called from `txLazyCommit`. `txLazyCommit` is responsible for taking the `tblk` from `TxAnchor.unlock_queue` and *then* calls `txUnlock`, and `txUnlock` subsequently removes the `tblk` from `log->synclist`. So, if `jfs_flush_journal` returns successfully, that means `log->synclist` *is* empty. That in turn suggests that `txUnlock` has *finished* for *all* `tblk`s associated with this log. Okay, if `txUnlock` has finished, logically, `release_metapage` must have already been called for all metapages in the `tlock` list, right? Which should mean the crash *shouldn't* occur during `jfs_umount`. But the crash is happening in `jfs_lazycommit` -> `txLazyCommit` -> `txUnlock` -> `release_metapage` -> `metapage_write_one`. This implies that if `jfs_lazycommit` is running, then `txUnlock` must also be running. The key is figuring out *why* `folio->mapping` would be NULL, which is the crux of the issue. Could the folio be truncated *while* `txUnlock` is running? Yes, that's the critical connection! If a file is truncated or deleted, its folios are removed from the page cache. Aha, metapages in JFS, these are for critical metadata! Inodes, directories, allocation maps… Can directories actually be truncated? Yep, when a directory is deleted, its inode is evicted and its folios are truncated. That's a huge clue! Now, the question is what aops does it use? Do directory folios use `jfs_metapage_aops`? Nope, they use `jfs_aops`. Okay, good to know. `jfs_aops` *does* have `block_invalidate_folio`, but directory blocks are still accessed via metapages!
That requires more digging into `jfs_dir.c` and `jfs_dtree.c` to see how the truncation logic interacts with the metapage management. |
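The race the debugger is converging on, and the fix described in the commit message at the top of this report, can be reduced to a minimal userspace sketch. All types here (`struct folio`, `struct metapage`) and the boolean flags are stand-ins for the kernel structures, not the real definitions; the point is only the control flow of the fixed `release_metapage` path.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-ins for the kernel structures involved; field names mirror the
 * report but this is not the real JFS code. */
struct folio { void *mapping; };

struct metapage {
	struct folio *folio;
	bool dirty;       /* models META_dirty */
	bool on_synclist; /* models membership in log->synclist */
};

/* Before the fix, the writeback path dereferenced folio->mapping and
 * crashed when the folio had been truncated.  After the fix, a NULL
 * mapping means "truncated": skip writeback, clear the dirty state
 * (so drop_metapage can free the metapage), and leave the logsync list. */
static void release_metapage_fixed(struct metapage *mp)
{
	if (mp->folio->mapping == NULL) {
		mp->dirty = false;        /* clear META_dirty */
		mp->on_synclist = false;  /* drop from logsync list */
		return;                   /* skip metapage_write_one() */
	}
	/* ... normal writeback path would run here ... */
	mp->dirty = false;
}
```

Clearing the dirty flag is the subtle part: as the commit message notes, `drop_metapage()` bails out early on a dirty metapage, so leaving `META_dirty` set would leak the metapage even though the crash is avoided.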
| 94/2 | 2026/03/16 15:43 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_dtree.c IncludeLines:true Name:dtSearch] Results: map[SourceCode: 557: /*
558: * dtSearch()
559: *
560: * function:
561: * Search for the entry with specified key
562: *
563: * parameter:
564: *
565: * return: 0 - search result on stack, leaf page pinned;
566: * errno - I/O error
567: */
568: int dtSearch(struct inode *ip, struct component_name * key, ino_t * data,
569: struct btstack * btstack, int flag)
570: {
571: int rc = 0;
572: int cmp = 1; /* init for empty page */
573: s64 bn;
574: struct metapage *mp;
575: dtpage_t *p;
576: s8 *stbl;
577: int base, index, lim;
578: struct btframe *btsp;
579: pxd_t *pxd;
580: int psize = 288; /* initial in-line directory */
581: ino_t inumber;
582: struct component_name ciKey;
583: struct super_block *sb = ip->i_sb;
584:
585: ciKey.name = kmalloc_array(JFS_NAME_MAX + 1, sizeof(wchar_t),
586: GFP_NOFS);
587: if (!ciKey.name) {
588: rc = -ENOMEM;
589: goto dtSearch_Exit2;
590: }
591:
592:
593: /* uppercase search key for c-i directory */
594: UniStrcpy(ciKey.name, key->name);
595: ciKey.namlen = key->namlen;
596:
597: /* only uppercase if case-insensitive support is on */
598: if ((JFS_SBI(sb)->mntflag & JFS_OS2) == JFS_OS2) {
599: ciToUpper(&ciKey);
600: }
601: BT_CLR(btstack); /* reset stack */
602:
603: /* init level count for max pages to split */
604: btstack->nsplit = 1;
605:
606: /*
607: * search down tree from root:
608: *
609: * between two consecutive entries of <Ki, Pi> and <Kj, Pj> of
610: * internal page, child page Pi contains entry with k, Ki <= K < Kj.
611: *
612: * if entry with search key K is not found
613: * internal page search find the entry with largest key Ki
614: * less than K which point to the child page to search;
615: * leaf page search find the entry with smallest key Kj
616: * greater than K so that the returned index is the position of
617: * the entry to be shifted right for insertion of new entry.
618: * for empty tree, search key is greater than any key of the tree.
619: *
620: * by convention, root bn = 0.
621: */
622: for (bn = 0;;) {
623: /* get/pin the page to search */
624: DT_GETPAGE(ip, bn, mp, psize, p, rc);
625: if (rc)
626: goto dtSearch_Exit1;
627:
628: /* get sorted entry table of the page */
629: stbl = DT_GETSTBL(p);
630:
631: /*
632: * binary search with search key K on the current page.
633: */
634: for (base = 0, lim = p->header.nextindex; lim; lim >>= 1) {
635: index = base + (lim >> 1);
636:
637: if (stbl[index] < 0) {
638: rc = -EIO;
639: goto out;
640: }
641:
642: if (p->header.flag & BT_LEAF) {
643: /* uppercase leaf name to compare */
644: cmp =
645: ciCompare(&ciKey, p, stbl[index],
646: JFS_SBI(sb)->mntflag);
647: } else {
648: /* router key is in uppercase */
649:
650: cmp = dtCompare(&ciKey, p, stbl[index]);
651:
652:
653: }
654: if (cmp == 0) {
655: /*
656: * search hit
657: */
658: /* search hit - leaf page:
659: * return the entry found
660: */
661: if (p->header.flag & BT_LEAF) {
662: inumber = le32_to_cpu(
663: ((struct ldtentry *) & p->slot[stbl[index]])->inumber);
664:
665: /*
666: * search for JFS_LOOKUP
667: */
668: if (flag == JFS_LOOKUP) {
669: *data = inumber;
670: rc = 0;
671: goto out;
672: }
673:
674: /*
675: * search for JFS_CREATE
676: */
677: if (flag == JFS_CREATE) {
678: *data = inumber;
679: rc = -EEXIST;
680: goto out;
681: }
682:
683: /*
684: * search for JFS_REMOVE or JFS_RENAME
685: */
686: if ((flag == JFS_REMOVE ||
687: flag == JFS_RENAME) &&
688: *data != inumber) {
689: rc = -ESTALE;
690: goto out;
691: }
692:
693: /*
694: * JFS_REMOVE|JFS_FINDDIR|JFS_RENAME
695: */
696: /* save search result */
697: *data = inumber;
698: btsp = btstack->top;
699: btsp->bn = bn;
700: btsp->index = index;
701: btsp->mp = mp;
702:
703: rc = 0;
704: goto dtSearch_Exit1;
705: }
706:
707: /* search hit - internal page:
708: * descend/search its child page
709: */
710: goto getChild;
711: }
712:
713: if (cmp > 0) {
714: base = index + 1;
715: --lim;
716: }
717: }
718:
719: /*
720: * search miss
721: *
722: * base is the smallest index with key (Kj) greater than
723: * search key (K) and may be zero or (maxindex + 1) index.
724: */
725: /*
726: * search miss - leaf page
727: *
728: * return location of entry (base) where new entry with
729: * search key K is to be inserted.
730: */
731: if (p->header.flag & BT_LEAF) {
732: /*
733: * search for JFS_LOOKUP, JFS_REMOVE, or JFS_RENAME
734: */
735: if (flag == JFS_LOOKUP || flag == JFS_REMOVE ||
736: flag == JFS_RENAME) {
737: rc = -ENOENT;
738: goto out;
739: }
740:
741: /*
742: * search for JFS_CREATE|JFS_FINDDIR:
743: *
744: * save search result
745: */
746: *data = 0;
747: btsp = btstack->top;
748: btsp->bn = bn;
749: btsp->index = base;
750: btsp->mp = mp;
751:
752: rc = 0;
753: goto dtSearch_Exit1;
754: }
755:
756: /*
757: * search miss - internal page
758: *
759: * if base is non-zero, decrement base by one to get the parent
760: * entry of the child page to search.
761: */
762: index = base ? base - 1 : base;
763:
764: /*
765: * go down to child page
766: */
767: getChild:
768: /* update max. number of pages to split */
769: if (BT_STACK_FULL(btstack)) {
770: /* Something's corrupted, mark filesystem dirty so
771: * chkdsk will fix it.
772: */
773: jfs_error(sb, "stack overrun!\n");
774: BT_STACK_DUMP(btstack);
775: rc = -EIO;
776: goto out;
777: }
778: btstack->nsplit++;
779:
780: /* push (bn, index) of the parent page/entry */
781: BT_PUSH(btstack, bn, index);
782:
783: /* get the child page block number */
784: pxd = (pxd_t *) & p->slot[stbl[index]];
785: bn = addressPXD(pxd);
786: psize = lengthPXD(pxd) << JFS_SBI(ip->i_sb)->l2bsize;
787:
788: /* unpin the parent page */
789: DT_PUTPAGE(mp);
790: }
791:
792: out:
793: DT_PUTPAGE(mp);
794:
795: dtSearch_Exit1:
796:
797: kfree(ciKey.name);
798:
799: dtSearch_Exit2:
800:
801: return rc;
802: }
SourceFile:fs/jfs/jfs_dtree.c] |
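The inner loop of `dtSearch` above uses the classic `base`/`lim` halving idiom for binary search over the sorted entry table. A standalone sketch of just that idiom, with a plain sorted `int` array standing in for the stbl-indexed keys (the return convention for a miss is an illustrative choice, not from the kernel):

```c
#include <assert.h>

/* Same shape as dtSearch's loop: lim halves each iteration, and a
 * "greater" comparison moves base past the probed index.  Returns the
 * matching index on a hit, or -(insertion point) - 1 on a miss,
 * mirroring how dtSearch falls out of the loop with `base` as the slot
 * where a new entry would be inserted. */
static int bt_search(const int *keys, int nextindex, int key)
{
	int base = 0, lim, index;

	for (lim = nextindex; lim; lim >>= 1) {
		index = base + (lim >> 1);
		if (keys[index] == key)
			return index;        /* search hit */
		if (key > keys[index]) {
			base = index + 1;    /* descend right half */
			--lim;
		}
	}
	return -base - 1;                    /* miss: base = insertion point */
}
```

This is why, on a leaf-page miss, `dtSearch` can hand `base` straight back as the position for a `JFS_CREATE` insertion.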
| 95/2 | 2026/03/16 15:43 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 51139 output: 47 thoughts: 50 Thoughts: **Digging into DT_GETPAGE and read_metapage** Okay, hold on a second. I'm following this path, and it looks like `DT_GETPAGE` actually calls `read_metapage`. That makes sense in the context of how we're caching things. Now, let me think... `read_metapage`... yeah, right, that's where things get interesting because it utilizes `sbi->direct_inode->i_mapping`. That `i_mapping` field is the core of how the filesystem uses the underlying block device to cache its metadata. Let's trace into `read_metapage`; I need to refresh my memory on the interactions. |
| 96/2 | 2026/03/16 15:44 | tool | codesearch-definition-source |
Error:requested entity does not exist Args: map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:read_metapage] Results: map[SourceCode: SourceFile:] |
| 97/2 | 2026/03/16 15:44 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 51420 output: 48 thoughts: 12 Thoughts: **Navigating the Metapage Retrieval Process** Alright, time to dive in. My immediate thought is, "Let's check `__get_metapage`." That's the core of how this system retrieves metadata, and it's where the critical logic resides. Before I even think about broader implications, I need to understand the fundamental retrieval mechanism, so I'm going straight to the source. How does it handle errors? What caching strategies (if any) are in place? And most importantly, what's its access pattern to the data source? What optimizations did the original author implement, and are they still optimal given the current scale of the data? |
| 98/2 | 2026/03/16 15:46 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:__get_metapage] Results: map[SourceCode: 683: struct metapage *__get_metapage(struct inode *inode, unsigned long lblock,
684: unsigned int size, int absolute,
685: unsigned long new)
686: {
687: int l2BlocksPerPage;
688: int l2bsize;
689: struct address_space *mapping;
690: struct metapage *mp = NULL;
691: struct folio *folio;
692: unsigned long page_index;
693: unsigned long page_offset;
694:
695: jfs_info("__get_metapage: ino = %ld, lblock = 0x%lx, abs=%d",
696: inode->i_ino, lblock, absolute);
697:
698: l2bsize = inode->i_blkbits;
699: l2BlocksPerPage = PAGE_SHIFT - l2bsize;
700: page_index = lblock >> l2BlocksPerPage;
701: page_offset = (lblock - (page_index << l2BlocksPerPage)) << l2bsize;
702: if ((page_offset + size) > PAGE_SIZE) {
703: jfs_err("MetaData crosses page boundary!!");
704: jfs_err("lblock = %lx, size = %d", lblock, size);
705: dump_stack();
706: return NULL;
707: }
708: if (absolute)
709: mapping = JFS_SBI(inode->i_sb)->direct_inode->i_mapping;
710: else {
711: /*
712: * If an nfs client tries to read an inode that is larger
713: * than any existing inodes, we may try to read past the
714: * end of the inode map
715: */
716: if ((lblock << inode->i_blkbits) >= inode->i_size)
717: return NULL;
718: mapping = inode->i_mapping;
719: }
720:
721: if (new && (PSIZE == PAGE_SIZE)) {
722: folio = filemap_grab_folio(mapping, page_index);
723: if (IS_ERR(folio)) {
724: jfs_err("filemap_grab_folio failed!");
725: return NULL;
726: }
727: folio_mark_uptodate(folio);
728: } else {
729: folio = read_mapping_folio(mapping, page_index, NULL);
730: if (IS_ERR(folio)) {
731: jfs_err("read_mapping_page failed!");
732: return NULL;
733: }
734: folio_lock(folio);
735: }
736:
737: mp = folio_to_mp(folio, page_offset);
738: if (mp) {
739: if (mp->logical_size != size) {
740: jfs_error(inode->i_sb,
741: "get_mp->logical_size != size\n");
742: jfs_err("logical_size = %d, size = %d",
743: mp->logical_size, size);
744: dump_stack();
745: goto unlock;
746: }
747: mp->count++;
748: lock_metapage(mp);
749: if (test_bit(META_discard, &mp->flag)) {
750: if (!new) {
751: jfs_error(inode->i_sb,
752: "using a discarded metapage\n");
753: discard_metapage(mp);
754: goto unlock;
755: }
756: clear_bit(META_discard, &mp->flag);
757: }
758: } else {
759: INCREMENT(mpStat.pagealloc);
760: mp = alloc_metapage(GFP_NOFS);
761: if (!mp)
762: goto unlock;
763: mp->folio = folio;
764: mp->sb = inode->i_sb;
765: mp->flag = 0;
766: mp->xflag = COMMIT_PAGE;
767: mp->count = 1;
768: mp->nohomeok = 0;
769: mp->logical_size = size;
770: mp->data = folio_address(folio) + page_offset;
771: mp->index = lblock;
772: if (unlikely(insert_metapage(folio, mp))) {
773: free_metapage(mp);
774: goto unlock;
775: }
776: lock_metapage(mp);
777: }
778:
779: if (new) {
780: jfs_info("zeroing mp = 0x%p", mp);
781: memset(mp->data, 0, PSIZE);
782: }
783:
784: folio_unlock(folio);
785: jfs_info("__get_metapage: returning = 0x%p data = 0x%p", mp, mp->data);
786: return mp;
787:
788: unlock:
789: folio_unlock(folio);
790: return NULL;
791: }
SourceFile:fs/jfs/jfs_metapage.c] |
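The lblock-to-page arithmetic at the top of `__get_metapage` is worth checking by hand. A userspace replica of just that math, assuming 4 KiB pages (`PAGE_SHIFT` = 12, as on x86-64; on other architectures the shift differs):

```c
#include <assert.h>

#define SKETCH_PAGE_SHIFT 12  /* assumption: 4 KiB pages */

/* Replicates __get_metapage's index/offset math: with l2bsize bits per
 * block, one page holds 2^(PAGE_SHIFT - l2bsize) blocks, so the page
 * index is the block number shifted down, and the offset is the
 * remainder scaled back up to bytes. */
static void split_lblock(unsigned long lblock, int l2bsize,
			 unsigned long *page_index, unsigned long *page_offset)
{
	int l2BlocksPerPage = SKETCH_PAGE_SHIFT - l2bsize;

	*page_index = lblock >> l2BlocksPerPage;
	*page_offset = (lblock - (*page_index << l2BlocksPerPage)) << l2bsize;
}
```

With 512-byte blocks (`l2bsize` = 9) there are 8 blocks per page, so block 17 lands in page 2 at byte offset 512, which is exactly the case the subsequent `(page_offset + size) > PAGE_SIZE` guard protects.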
| 99/2 | 2026/03/16 15:46 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 53307 output: 45 thoughts: 150 Thoughts: **Metapage Mapping Insights** Okay, so I'm digging into how these metapages are handled, and it looks like `__get_metapage` is conditional. If `absolute` is true, we're using `JFS_SBI(inode->i_sb)->direct_inode->i_mapping`, which makes sense for, well, direct inode-related things. But if `absolute` is false, it's using just `inode->i_mapping`. That's interesting… so the metapages *can* be attached directly to the `inode->i_mapping`! Wait a minute, that opens a potential can of worms. If the metapages are tied to `inode->i_mapping` and that inode gets truncated or deleted, those folios are getting yanked out of the page cache! That's... not ideal in a lot of scenarios. And hang on, `inode->i_mapping` isn't just for regular files; it can be a directory inode as well. I should probably confirm how this works. I wonder if `absolute` is ever false for directories? Let me check how this is used. Let's see, it looks like `DT_GETPAGE` uses this for the `dtSearch` functionality. I should check that path. |
| 100/2 | 2026/03/16 15:47 | tool | codesearch-definition-source |
Error:requested entity does not exist Args: map[ContextFile:fs/jfs/jfs_dtree.c IncludeLines:true Name:DT_GETPAGE] Results: map[SourceCode: SourceFile:] |
| 101/2 | 2026/03/16 15:47 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 53799 output: 18 thoughts: 19 Thoughts: **Navigating the JFS Filesystem for `DT_GETPAGE`** Okay, so I need to find where the `DT_GETPAGE` functionality is handled within the JFS filesystem code. My immediate instinct is to start within the core `fs/jfs/` directory, since JFS is the primary target. I'll use a targeted search term, which in this case will be `DT_GETPAGE`, and look for its definition and uses. My goal is to determine how dtree page requests are handled; from there I can narrow down where I most need to dig. |
| 102/2 | 2026/03/16 15:49 | tool | grepper |
Args:map[Expression:DT_GETPAGE] Results: map[Output:fs/jfs/jfs_dtree.c=102=struct dtsplit {
--
fs/jfs/jfs_dtree.c-113-/* get page buffer for specified block address */
fs/jfs/jfs_dtree.c:114:#define DT_GETPAGE(IP, BN, MP, SIZE, P, RC) \
fs/jfs/jfs_dtree.c-115-do { \
--
fs/jfs/jfs_dtree.c-123- jfs_error((IP)->i_sb, \
fs/jfs/jfs_dtree.c:124: "DT_GETPAGE: dtree page corrupt\n"); \
fs/jfs/jfs_dtree.c-125- MP = NULL; \
--
fs/jfs/jfs_dtree.c=568=int dtSearch(struct inode *ip, struct component_name * key, ino_t * data,
--
fs/jfs/jfs_dtree.c-623- /* get/pin the page to search */
fs/jfs/jfs_dtree.c:624: DT_GETPAGE(ip, bn, mp, psize, p, rc);
fs/jfs/jfs_dtree.c-625- if (rc)
--
fs/jfs/jfs_dtree.c=926=static int dtSplitUp(tid_t tid,
--
fs/jfs/jfs_dtree.c-1133- /* get the parent page <sp> */
fs/jfs/jfs_dtree.c:1134: DT_GETPAGE(ip, parent->bn, smp, PSIZE, sp, rc);
fs/jfs/jfs_dtree.c-1135- if (rc) {
--
fs/jfs/jfs_dtree.c=1326=static int dtSplitPage(tid_t tid, struct inode *ip, struct dtsplit * split,
--
fs/jfs/jfs_dtree.c-1470- if (nextbn != 0) {
fs/jfs/jfs_dtree.c:1471: DT_GETPAGE(ip, nextbn, mp, PSIZE, p, rc);
fs/jfs/jfs_dtree.c-1472- if (rc) {
--
fs/jfs/jfs_dtree.c=1638=static int dtExtendPage(tid_t tid,
--
fs/jfs/jfs_dtree.c-1668- parent = BT_POP(btstack);
fs/jfs/jfs_dtree.c:1669: DT_GETPAGE(ip, parent->bn, pmp, PSIZE, pp, rc);
fs/jfs/jfs_dtree.c-1670- if (rc)
--
fs/jfs/jfs_dtree.c=2078=int dtDelete(tid_t tid,
--
fs/jfs/jfs_dtree.c-2125- /* Read next leaf page */
fs/jfs/jfs_dtree.c:2126: DT_GETPAGE(ip, le64_to_cpu(p->header.next),
fs/jfs/jfs_dtree.c-2127- nmp, PSIZE, np, rc);
--
fs/jfs/jfs_dtree.c=2233=static int dtDeleteUp(tid_t tid, struct inode *ip,
--
fs/jfs/jfs_dtree.c-2303- /* pin the parent page <sp> */
fs/jfs/jfs_dtree.c:2304: DT_GETPAGE(ip, parent->bn, mp, PSIZE, p, rc);
fs/jfs/jfs_dtree.c-2305- if (rc)
--
fs/jfs/jfs_dtree.c=2445=static int dtRelink(tid_t tid, struct inode *ip, dtpage_t * p)
--
fs/jfs/jfs_dtree.c-2458- if (nextbn != 0) {
fs/jfs/jfs_dtree.c:2459: DT_GETPAGE(ip, nextbn, mp, PSIZE, p, rc);
fs/jfs/jfs_dtree.c-2460- if (rc)
--
fs/jfs/jfs_dtree.c-2487- if (prevbn != 0) {
fs/jfs/jfs_dtree.c:2488: DT_GETPAGE(ip, prevbn, mp, PSIZE, p, rc);
fs/jfs/jfs_dtree.c-2489- if (rc)
--
fs/jfs/jfs_dtree.c=2616=static int add_missing_indices(struct inode *inode, s64 bn)
--
fs/jfs/jfs_dtree.c-2631-
fs/jfs/jfs_dtree.c:2632: DT_GETPAGE(inode, bn, mp, PSIZE, p, rc);
fs/jfs/jfs_dtree.c-2633-
fs/jfs/jfs_dtree.c-2634- if (rc) {
fs/jfs/jfs_dtree.c:2635: printk(KERN_ERR "DT_GETPAGE failed!\n");
fs/jfs/jfs_dtree.c-2636- goto end;
--
fs/jfs/jfs_dtree.c=2714=int jfs_readdir(struct file *file, struct dir_context *ctx)
--
fs/jfs/jfs_dtree.c-2796- index = dirtab_slot.slot;
fs/jfs/jfs_dtree.c:2797: DT_GETPAGE(ip, bn, mp, PSIZE, p, rc);
fs/jfs/jfs_dtree.c-2798- if (rc) {
--
fs/jfs/jfs_dtree.c-3040-
fs/jfs/jfs_dtree.c:3041: DT_GETPAGE(ip, bn, mp, PSIZE, p, rc);
fs/jfs/jfs_dtree.c-3042- if (rc) {
--
fs/jfs/jfs_dtree.c=3060=static int dtReadFirst(struct inode *ip, struct btstack * btstack)
--
fs/jfs/jfs_dtree.c-3078- for (bn = 0;;) {
fs/jfs/jfs_dtree.c:3079: DT_GETPAGE(ip, bn, mp, psize, p, rc);
fs/jfs/jfs_dtree.c-3080- if (rc)
--
fs/jfs/jfs_dtree.c=3139=static int dtReadNext(struct inode *ip, loff_t * offset,
--
fs/jfs/jfs_dtree.c-3222- bn = parent->bn;
fs/jfs/jfs_dtree.c:3223: DT_GETPAGE(ip, bn, mp, PSIZE, p, rc);
fs/jfs/jfs_dtree.c-3224- if (rc)
--
fs/jfs/jfs_dtree.c-3243- /* get next parent page */
fs/jfs/jfs_dtree.c:3244: DT_GETPAGE(ip, bn, mp, PSIZE, p, rc);
fs/jfs/jfs_dtree.c-3245- if (rc)
--
fs/jfs/jfs_dtree.c-3263- c:
fs/jfs/jfs_dtree.c:3264: DT_GETPAGE(ip, bn, mp, PSIZE, p, rc);
fs/jfs/jfs_dtree.c-3265- if (rc)
--
fs/jfs/jfs_dtree.c-3284- /* get next leaf page */
fs/jfs/jfs_dtree.c:3285: DT_GETPAGE(ip, bn, mp, PSIZE, p, rc);
fs/jfs/jfs_dtree.c-3286- if (rc)
] |
| 103/2 | 2026/03/16 15:49 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 56176 output: 45 thoughts: 13 Thoughts: **Diving into the `DT_GETPAGE` Macro** Right, let's get down to brass tacks. I need to refresh my memory on the `DT_GETPAGE` macro definition. It's probably been a while since I last meticulously examined this particular piece of the puzzle. I know it's fundamental here, so I want an accurate understanding of the nuances of its implementation. I'll double-check the exact syntax and purpose to make sure I'm following my current logic as I troubleshoot this issue. |
| 104/2 | 2026/03/16 15:51 | tool | codesearch-definition-source |
Error:requested entity does not exist Args: map[ContextFile:fs/jfs/jfs_dtree.c IncludeLines:true Name:DT_GETPAGE] Results: map[SourceCode: SourceFile:] |
| 105/2 | 2026/03/16 15:51 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 56403 output: 27 thoughts: 37 Thoughts: **Analyzing a Macro in `jfs_dtree.c`** Alright, this code block is hitting me as a macro, right off the bat. Let's dig in and see what's what. I'm going to jump straight to lines 113 through 125 within the `fs/jfs/jfs_dtree.c` file; that's usually the best place to start when trying to understand how a macro expands. From there, I'll be looking for how the macro handles its inputs and what it expands into, focusing on the logic it implements, the data structures it manipulates, and any potential pitfalls in the code. This ought to give me a good handle on its purpose. |
| 106/2 | 2026/03/16 15:51 | tool | read-file |
Args:map[File:fs/jfs/jfs_dtree.c] Results: map[Contents:// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Copyright (C) International Business Machines Corp., 2000-2004
*/
/*
* jfs_dtree.c: directory B+-tree manager
*
* B+-tree with variable length key directory:
*
* each directory page is structured as an array of 32-byte
* directory entry slots initialized as a freelist
* to avoid search/compaction of free space at insertion.
* when an entry is inserted, a number of slots are allocated
* from the freelist as required to store variable length data
* of the entry; when the entry is deleted, slots of the entry
* are returned to freelist.
*
* leaf entry stores full name as key and file serial number
* (aka inode number) as data.
* internal/router entry stores sufffix compressed name
* as key and simple extent descriptor as data.
*
* each directory page maintains a sorted entry index table
* which stores the start slot index of sorted entries
* to allow binary search on the table.
*
* directory starts as a root/leaf page in on-disk inode
* inline data area.
* when it becomes full, it starts a leaf of a external extent
* of length of 1 block. each time the first leaf becomes full,
* it is extended rather than split (its size is doubled),
* until its length becoms 4 KBytes, from then the extent is split
* with new 4 Kbyte extent when it becomes full
* to reduce external fragmentation of small directories.
*
* blah, blah, blah, for linear scan of directory in pieces by
* readdir().
*
*
* case-insensitive directory file system
*
* names are stored in case-sensitive way in leaf entry.
* but stored, searched and compared in case-insensitive (uppercase) order
* (i.e., both search key and entry key are folded for search/compare):
* (note that case-sensitive order is BROKEN in storage, e.g.,
* sensitive: Ad, aB, aC, aD -> insensitive: aB, aC, aD, Ad
*
* entries which folds to the same key makes up a equivalent class
* whose members are stored as contiguous cluster (may cross page boundary)
* but whose order is arbitrary and acts as duplicate, e.g.,
* abc, Abc, aBc, abC)
*
* once match is found at leaf, requires scan forward/backward
* either for, in case-insensitive search, duplicate
* or for, in case-sensitive search, for exact match
*
* router entry must be created/stored in case-insensitive way
* in internal entry:
* (right most key of left page and left most key of right page
* are folded, and its suffix compression is propagated as router
* key in parent)
* (e.g., if split occurs <abc> and <aBd>, <ABD> trather than <aB>
* should be made the router key for the split)
*
* case-insensitive search:
*
* fold search key;
*
* case-insensitive search of B-tree:
* for internal entry, router key is already folded;
* for leaf entry, fold the entry key before comparison.
*
* if (leaf entry case-insensitive match found)
* if (next entry satisfies case-insensitive match)
* return EDUPLICATE;
* if (prev entry satisfies case-insensitive match)
* return EDUPLICATE;
* return match;
* else
* return no match;
*
* serialization:
* target directory inode lock is being held on entry/exit
* of all main directory service routines.
*
* log based recovery:
*/
#include <linux/fs.h>
#include <linux/quotaops.h>
#include <linux/slab.h>
#include "jfs_incore.h"
#include "jfs_superblock.h"
#include "jfs_filsys.h"
#include "jfs_metapage.h"
#include "jfs_dmap.h"
#include "jfs_unicode.h"
#include "jfs_debug.h"
/* dtree split parameter */
struct dtsplit {
struct metapage *mp;
s16 index;
s16 nslot;
struct component_name *key;
ddata_t *data;
struct pxdlist *pxdlist;
};
#define DT_PAGE(IP, MP) BT_PAGE(IP, MP, dtpage_t, i_dtroot)
/* get page buffer for specified block address */
#define DT_GETPAGE(IP, BN, MP, SIZE, P, RC) \
do { \
BT_GETPAGE(IP, BN, MP, dtpage_t, SIZE, P, RC, i_dtroot); \
if (!(RC)) { \
if (((P)->header.nextindex > \
(((BN) == 0) ? DTROOTMAXSLOT : (P)->header.maxslot)) || \
((BN) && (((P)->header.maxslot > DTPAGEMAXSLOT) || \
((P)->header.stblindex >= DTPAGEMAXSLOT)))) { \
BT_PUTPAGE(MP); \
jfs_error((IP)->i_sb, \
"DT_GETPAGE: dtree page corrupt\n"); \
MP = NULL; \
RC = -EIO; \
} \
} \
} while (0)
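The sanity check that `DT_GETPAGE` applies after `BT_GETPAGE` can be unrolled into a plain predicate for readability. This is a sketch: the two limits are derived from the slot sizes visible in the surrounding code (a 288-byte inline root of 32-byte slots gives 288/32 = 9, a 4096-byte page gives 4096/32 = 128), which should correspond to the kernel's `DTROOTMAXSLOT` and `DTPAGEMAXSLOT`.

```c
#include <assert.h>
#include <stdbool.h>

/* Derived limits; see lead-in.  bn == 0 denotes the in-inode root page. */
#define SKETCH_DTROOTMAXSLOT 9
#define SKETCH_DTPAGEMAXSLOT 128

/* The corruption test from the DT_GETPAGE macro, as a predicate:
 * nextindex must fit the page's slot count, and a non-root page must
 * keep maxslot and stblindex within a full page's slot range. */
static bool dtpage_ok(long bn, int nextindex, int maxslot, int stblindex)
{
	if (nextindex > (bn == 0 ? SKETCH_DTROOTMAXSLOT : maxslot))
		return false;
	if (bn && (maxslot > SKETCH_DTPAGEMAXSLOT ||
		   stblindex >= SKETCH_DTPAGEMAXSLOT))
		return false;
	return true;
}
```

When this predicate fails, the macro releases the page, logs "dtree page corrupt", and returns -EIO so chkdsk can repair the filesystem.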
/* for consistency */
#define DT_PUTPAGE(MP) BT_PUTPAGE(MP)
#define DT_GETSEARCH(IP, LEAF, BN, MP, P, INDEX) \
BT_GETSEARCH(IP, LEAF, BN, MP, dtpage_t, P, INDEX, i_dtroot)
/*
* forward references
*/
static int dtSplitUp(tid_t tid, struct inode *ip,
struct dtsplit * split, struct btstack * btstack);
static int dtSplitPage(tid_t tid, struct inode *ip, struct dtsplit * split,
struct metapage ** rmpp, dtpage_t ** rpp, pxd_t * rxdp);
static int dtExtendPage(tid_t tid, struct inode *ip,
struct dtsplit * split, struct btstack * btstack);
static int dtSplitRoot(tid_t tid, struct inode *ip,
struct dtsplit * split, struct metapage ** rmpp);
static int dtDeleteUp(tid_t tid, struct inode *ip, struct metapage * fmp,
dtpage_t * fp, struct btstack * btstack);
static int dtRelink(tid_t tid, struct inode *ip, dtpage_t * p);
static int dtReadFirst(struct inode *ip, struct btstack * btstack);
static int dtReadNext(struct inode *ip,
loff_t * offset, struct btstack * btstack);
static int dtCompare(struct component_name * key, dtpage_t * p, int si);
static int ciCompare(struct component_name * key, dtpage_t * p, int si,
int flag);
static void dtGetKey(dtpage_t * p, int i, struct component_name * key,
int flag);
static int ciGetLeafPrefixKey(dtpage_t * lp, int li, dtpage_t * rp,
int ri, struct component_name * key, int flag);
static void dtInsertEntry(dtpage_t * p, int index, struct component_name * key,
ddata_t * data, struct dt_lock **);
static void dtMoveEntry(dtpage_t * sp, int si, dtpage_t * dp,
struct dt_lock ** sdtlock, struct dt_lock ** ddtlock,
int do_index);
static void dtDeleteEntry(dtpage_t * p, int fi, struct dt_lock ** dtlock);
static void dtTruncateEntry(dtpage_t * p, int ti, struct dt_lock ** dtlock);
static void dtLinelockFreelist(dtpage_t * p, int m, struct dt_lock ** dtlock);
#define ciToUpper(c) UniStrupr((c)->name)
/*
* read_index_page()
*
* Reads a page of a directory's index table.
* Having metadata mapped into the directory inode's address space
* presents a multitude of problems. We avoid this by mapping to
* the absolute address space outside of the *_metapage routines
*/
static struct metapage *read_index_page(struct inode *inode, s64 blkno)
{
int rc;
s64 xaddr;
int xflag;
s32 xlen;
rc = xtLookup(inode, blkno, 1, &xflag, &xaddr, &xlen, 1);
if (rc || (xaddr == 0))
return NULL;
return read_metapage(inode, xaddr, PSIZE, 1);
}
/*
* get_index_page()
*
* Same as read_index_page(), but gets a new page without reading
*/
static struct metapage *get_index_page(struct inode *inode, s64 blkno)
{
int rc;
s64 xaddr;
int xflag;
s32 xlen;
rc = xtLookup(inode, blkno, 1, &xflag, &xaddr, &xlen, 1);
if (rc || (xaddr == 0))
return NULL;
return get_metapage(inode, xaddr, PSIZE, 1);
}
/*
* find_index()
*
* Returns dtree page containing directory table entry for specified
* index and pointer to its entry.
*
* mp must be released by caller.
*/
static struct dir_table_slot *find_index(struct inode *ip, u32 index,
struct metapage ** mp, s64 *lblock)
{
struct jfs_inode_info *jfs_ip = JFS_IP(ip);
s64 blkno;
s64 offset;
int page_offset;
struct dir_table_slot *slot;
static int maxWarnings = 10;
if (index < 2) {
if (maxWarnings) {
jfs_warn("find_entry called with index = %d", index);
maxWarnings--;
}
return NULL;
}
if (index >= jfs_ip->next_index) {
jfs_warn("find_entry called with index >= next_index");
return NULL;
}
if (jfs_dirtable_inline(ip)) {
/*
* Inline directory table
*/
*mp = NULL;
slot = &jfs_ip->i_dirtable[index - 2];
} else {
offset = (index - 2) * sizeof(struct dir_table_slot);
page_offset = offset & (PSIZE - 1);
blkno = ((offset + 1) >> L2PSIZE) <<
JFS_SBI(ip->i_sb)->l2nbperpage;
if (*mp && (*lblock != blkno)) {
release_metapage(*mp);
*mp = NULL;
}
if (!(*mp)) {
*lblock = blkno;
*mp = read_index_page(ip, blkno);
}
if (!(*mp)) {
jfs_err("free_index: error reading directory table");
return NULL;
}
slot =
(struct dir_table_slot *) ((char *) (*mp)->data +
page_offset);
}
return slot;
}
static inline void lock_index(tid_t tid, struct inode *ip, struct metapage * mp,
u32 index)
{
struct tlock *tlck;
struct linelock *llck;
struct lv *lv;
tlck = txLock(tid, ip, mp, tlckDATA);
llck = (struct linelock *) tlck->lock;
if (llck->index >= llck->maxcnt)
llck = txLinelock(llck);
lv = &llck->lv[llck->index];
/*
* Linelock slot size is twice the size of directory table
* slot size. 512 entries per page.
*/
lv->offset = ((index - 2) & 511) >> 1;
lv->length = 1;
llck->index++;
}
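/*
 * Worked example of the linelock math in lock_index(): a directory
 * table slot is 8 bytes and a linelock unit covers 16 bytes, so two
 * table entries share one lv slot and a 4K page holds 512 entries.
 * For index = 100 (illustrative):
 *
 *	lv->offset = ((100 - 2) & 511) >> 1 = 49
 */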
/*
* add_index()
*
* Adds an entry to the directory index table. This is used to provide
* each directory entry with a persistent index with which to resume
* directory traversals
*/
static u32 add_index(tid_t tid, struct inode *ip, s64 bn, int slot)
{
struct super_block *sb = ip->i_sb;
struct jfs_sb_info *sbi = JFS_SBI(sb);
struct jfs_inode_info *jfs_ip = JFS_IP(ip);
u64 blkno;
struct dir_table_slot *dirtab_slot;
u32 index;
struct linelock *llck;
struct lv *lv;
struct metapage *mp;
s64 offset;
uint page_offset;
struct tlock *tlck;
s64 xaddr;
ASSERT(DO_INDEX(ip));
if (jfs_ip->next_index < 2) {
jfs_warn("add_index: next_index = %d. Resetting!",
jfs_ip->next_index);
jfs_ip->next_index = 2;
}
index = jfs_ip->next_index++;
if (index <= MAX_INLINE_DIRTABLE_ENTRY) {
/*
* i_size reflects the size of the index table, at 8 bytes per entry.
*/
ip->i_size = (loff_t) (index - 1) << 3;
/*
* dir table fits inline within inode
*/
dirtab_slot = &jfs_ip->i_dirtable[index-2];
dirtab_slot->flag = DIR_INDEX_VALID;
dirtab_slot->slot = slot;
DTSaddress(dirtab_slot, bn);
set_cflag(COMMIT_Dirtable, ip);
return index;
}
if (index == (MAX_INLINE_DIRTABLE_ENTRY + 1)) {
struct dir_table_slot temp_table[12];
/*
* It's time to move the inline table to an external
* page and begin to build the xtree
*/
if (dquot_alloc_block(ip, sbi->nbperpage))
goto clean_up;
if (dbAlloc(ip, 0, sbi->nbperpage, &xaddr)) {
dquot_free_block(ip, sbi->nbperpage);
goto clean_up;
}
/*
* Save the table, we're going to overwrite it with the
* xtree root
*/
memcpy(temp_table, &jfs_ip->i_dirtable, sizeof(temp_table));
/*
* Initialize empty x-tree
*/
xtInitRoot(tid, ip);
/*
* Add the first block to the xtree
*/
if (xtInsert(tid, ip, 0, 0, sbi->nbperpage, &xaddr, 0)) {
/* This really shouldn't fail */
jfs_warn("add_index: xtInsert failed!");
memcpy(&jfs_ip->i_dirtable, temp_table,
sizeof (temp_table));
dbFree(ip, xaddr, sbi->nbperpage);
dquot_free_block(ip, sbi->nbperpage);
goto clean_up;
}
ip->i_size = PSIZE;
mp = get_index_page(ip, 0);
if (!mp) {
jfs_err("add_index: get_metapage failed!");
xtTruncate(tid, ip, 0, COMMIT_PWMAP);
memcpy(&jfs_ip->i_dirtable, temp_table,
sizeof (temp_table));
goto clean_up;
}
tlck = txLock(tid, ip, mp, tlckDATA);
llck = (struct linelock *) & tlck->lock;
ASSERT(llck->index == 0);
lv = &llck->lv[0];
lv->offset = 0;
lv->length = 6; /* tlckDATA slot size is 16 bytes */
llck->index++;
memcpy(mp->data, temp_table, sizeof(temp_table));
mark_metapage_dirty(mp);
release_metapage(mp);
/*
* Logging is now directed by xtree tlocks
*/
clear_cflag(COMMIT_Dirtable, ip);
}
offset = (index - 2) * sizeof(struct dir_table_slot);
page_offset = offset & (PSIZE - 1);
blkno = ((offset + 1) >> L2PSIZE) << sbi->l2nbperpage;
if (page_offset == 0) {
/*
* This will be the beginning of a new page
*/
xaddr = 0;
if (xtInsert(tid, ip, 0, blkno, sbi->nbperpage, &xaddr, 0)) {
jfs_warn("add_index: xtInsert failed!");
goto clean_up;
}
ip->i_size += PSIZE;
if ((mp = get_index_page(ip, blkno)))
memset(mp->data, 0, PSIZE); /* Just looks better */
else
xtTruncate(tid, ip, offset, COMMIT_PWMAP);
} else
mp = read_index_page(ip, blkno);
if (!mp) {
jfs_err("add_index: get/read_metapage failed!");
goto clean_up;
}
lock_index(tid, ip, mp, index);
dirtab_slot =
(struct dir_table_slot *) ((char *) mp->data + page_offset);
dirtab_slot->flag = DIR_INDEX_VALID;
dirtab_slot->slot = slot;
DTSaddress(dirtab_slot, bn);
mark_metapage_dirty(mp);
release_metapage(mp);
return index;
clean_up:
jfs_ip->next_index--;
return 0;
}
/*
* free_index()
*
* Marks an entry in the directory index table as free.
*/
static void free_index(tid_t tid, struct inode *ip, u32 index, u32 next)
{
struct dir_table_slot *dirtab_slot;
s64 lblock;
struct metapage *mp = NULL;
dirtab_slot = find_index(ip, index, &mp, &lblock);
if (!dirtab_slot)
return;
dirtab_slot->flag = DIR_INDEX_FREE;
dirtab_slot->slot = dirtab_slot->addr1 = 0;
dirtab_slot->addr2 = cpu_to_le32(next);
if (mp) {
lock_index(tid, ip, mp, index);
mark_metapage_dirty(mp);
release_metapage(mp);
} else
set_cflag(COMMIT_Dirtable, ip);
}
/*
* modify_index()
*
* Changes an entry in the directory index table
*/
static void modify_index(tid_t tid, struct inode *ip, u32 index, s64 bn,
int slot, struct metapage ** mp, s64 *lblock)
{
struct dir_table_slot *dirtab_slot;
dirtab_slot = find_index(ip, index, mp, lblock);
if (!dirtab_slot)
return;
DTSaddress(dirtab_slot, bn);
dirtab_slot->slot = slot;
if (*mp) {
lock_index(tid, ip, *mp, index);
mark_metapage_dirty(*mp);
} else
set_cflag(COMMIT_Dirtable, ip);
}
/*
* read_index()
*
* reads a directory table slot
*/
static int read_index(struct inode *ip, u32 index,
struct dir_table_slot * dirtab_slot)
{
s64 lblock;
struct metapage *mp = NULL;
struct dir_table_slot *slot;
slot = find_index(ip, index, &mp, &lblock);
if (!slot) {
return -EIO;
}
memcpy(dirtab_slot, slot, sizeof(struct dir_table_slot));
if (mp)
release_metapage(mp);
return 0;
}
/*
* dtSearch()
*
* function:
* Search for the entry with specified key
*
* parameter:
*
* return: 0 - search result on stack, leaf page pinned;
* errno - I/O error
*/
int dtSearch(struct inode *ip, struct component_name * key, ino_t * data,
struct btstack * btstack, int flag)
{
int rc = 0;
int cmp = 1; /* init for empty page */
s64 bn;
struct metapage *mp;
dtpage_t *p;
s8 *stbl;
int base, index, lim;
struct btframe *btsp;
pxd_t *pxd;
int psize = 288; /* initial in-line directory */
ino_t inumber;
struct component_name ciKey;
struct super_block *sb = ip->i_sb;
ciKey.name = kmalloc_array(JFS_NAME_MAX + 1, sizeof(wchar_t),
GFP_NOFS);
if (!ciKey.name) {
rc = -ENOMEM;
goto dtSearch_Exit2;
}
/* uppercase search key for c-i directory */
UniStrcpy(ciKey.name, key->name);
ciKey.namlen = key->namlen;
/* only uppercase if case-insensitive support is on */
if ((JFS_SBI(sb)->mntflag & JFS_OS2) == JFS_OS2) {
ciToUpper(&ciKey);
}
BT_CLR(btstack); /* reset stack */
/* init level count for max pages to split */
btstack->nsplit = 1;
/*
* search down tree from root:
*
* between two consecutive entries of <Ki, Pi> and <Kj, Pj> of
* internal page, child page Pi contains entry with k, Ki <= K < Kj.
*
* if entry with search key K is not found
* internal page search find the entry with largest key Ki
* less than K which point to the child page to search;
* leaf page search find the entry with smallest key Kj
* greater than K so that the returned index is the position of
* the entry to be shifted right for insertion of new entry.
* for empty tree, search key is greater than any key of the tree.
*
* by convention, root bn = 0.
*/
for (bn = 0;;) {
/* get/pin the page to search */
DT_GETPAGE(ip, bn, mp, psize, p, rc);
if (rc)
goto dtSearch_Exit1;
/* get sorted entry table of the page */
stbl = DT_GETSTBL(p);
/*
* binary search with search key K on the current page.
*/
for (base = 0, lim = p->header.nextindex; lim; lim >>= 1) {
index = base + (lim >> 1);
if (stbl[index] < 0) {
rc = -EIO;
goto out;
}
if (p->header.flag & BT_LEAF) {
/* uppercase leaf name to compare */
cmp =
ciCompare(&ciKey, p, stbl[index],
JFS_SBI(sb)->mntflag);
} else {
/* router key is in uppercase */
cmp = dtCompare(&ciKey, p, stbl[index]);
}
if (cmp == 0) {
/*
* search hit
*/
/* search hit - leaf page:
* return the entry found
*/
if (p->header.flag & BT_LEAF) {
inumber = le32_to_cpu(
((struct ldtentry *) & p->slot[stbl[index]])->inumber);
/*
* search for JFS_LOOKUP
*/
if (flag == JFS_LOOKUP) {
*data = inumber;
rc = 0;
goto out;
}
/*
* search for JFS_CREATE
*/
if (flag == JFS_CREATE) {
*data = inumber;
rc = -EEXIST;
goto out;
}
/*
* search for JFS_REMOVE or JFS_RENAME
*/
if ((flag == JFS_REMOVE ||
flag == JFS_RENAME) &&
*data != inumber) {
rc = -ESTALE;
goto out;
}
/*
* JFS_REMOVE|JFS_FINDDIR|JFS_RENAME
*/
/* save search result */
*data = inumber;
btsp = btstack->top;
btsp->bn = bn;
btsp->index = index;
btsp->mp = mp;
rc = 0;
goto dtSearch_Exit1;
}
/* search hit - internal page:
* descend/search its child page
*/
goto getChild;
}
if (cmp > 0) {
base = index + 1;
--lim;
}
}
/*
* search miss
*
* base is the smallest index with key (Kj) greater than
* search key (K) and may be zero or (maxindex + 1) index.
*/
/*
* search miss - leaf page
*
* return location of entry (base) where new entry with
* search key K is to be inserted.
*/
if (p->header.flag & BT_LEAF) {
/*
* search for JFS_LOOKUP, JFS_REMOVE, or JFS_RENAME
*/
if (flag == JFS_LOOKUP || flag == JFS_REMOVE ||
flag == JFS_RENAME) {
rc = -ENOENT;
goto out;
}
/*
* search for JFS_CREATE|JFS_FINDDIR:
*
* save search result
*/
*data = 0;
btsp = btstack->top;
btsp->bn = bn;
btsp->index = base;
btsp->mp = mp;
rc = 0;
goto dtSearch_Exit1;
}
/*
* search miss - internal page
*
* if base is non-zero, decrement base by one to get the parent
* entry of the child page to search.
*/
index = base ? base - 1 : base;
/*
* go down to child page
*/
getChild:
/* update max. number of pages to split */
if (BT_STACK_FULL(btstack)) {
/* Something's corrupted, mark filesystem dirty so
* chkdsk will fix it.
*/
jfs_error(sb, "stack overrun!\n");
BT_STACK_DUMP(btstack);
rc = -EIO;
goto out;
}
btstack->nsplit++;
/* push (bn, index) of the parent page/entry */
BT_PUSH(btstack, bn, index);
/* get the child page block number */
pxd = (pxd_t *) & p->slot[stbl[index]];
bn = addressPXD(pxd);
psize = lengthPXD(pxd) << JFS_SBI(ip->i_sb)->l2bsize;
/* unpin the parent page */
DT_PUTPAGE(mp);
}
out:
DT_PUTPAGE(mp);
dtSearch_Exit1:
kfree(ciKey.name);
dtSearch_Exit2:
return rc;
}
/*
* dtInsert()
*
* function: insert an entry into the directory tree
*
* parameter:
*
* return: 0 - success;
* errno - failure;
*/
int dtInsert(tid_t tid, struct inode *ip,
struct component_name * name, ino_t * fsn, struct btstack * btstack)
{
int rc = 0;
struct metapage *mp; /* meta-page buffer */
dtpage_t *p; /* base B+-tree index page */
s64 bn;
int index;
struct dtsplit split; /* split information */
ddata_t data;
struct dt_lock *dtlck;
int n;
struct tlock *tlck;
struct lv *lv;
/*
* retrieve search result
*
* dtSearch() returns (leaf page pinned, index at which to insert).
* n.b. dtSearch() may return index of (maxindex + 1) of
* the full page.
*/
DT_GETSEARCH(ip, btstack->top, bn, mp, p, index);
if (p->header.freelist == 0)
return -EINVAL;
/*
* insert entry for new key
*/
if (DO_INDEX(ip)) {
if (JFS_IP(ip)->next_index == DIREND) {
DT_PUTPAGE(mp);
return -EMLINK;
}
n = NDTLEAF(name->namlen);
data.leaf.tid = tid;
data.leaf.ip = ip;
} else {
n = NDTLEAF_LEGACY(name->namlen);
data.leaf.ip = NULL; /* signifies legacy directory format */
}
data.leaf.ino = *fsn;
/*
* leaf page does not have enough room for new entry:
*
* extend/split the leaf page;
*
* dtSplitUp() will insert the entry and unpin the leaf page.
*/
if (n > p->header.freecnt) {
split.mp = mp;
split.index = index;
split.nslot = n;
split.key = name;
split.data = &data;
rc = dtSplitUp(tid, ip, &split, btstack);
return rc;
}
/*
* leaf page does have enough room for new entry:
*
* insert the new data entry into the leaf page;
*/
BT_MARK_DIRTY(mp, ip);
/*
* acquire a transaction lock on the leaf page
*/
tlck = txLock(tid, ip, mp, tlckDTREE | tlckENTRY);
dtlck = (struct dt_lock *) & tlck->lock;
ASSERT(dtlck->index == 0);
lv = & dtlck->lv[0];
/* linelock header */
lv->offset = 0;
lv->length = 1;
dtlck->index++;
dtInsertEntry(p, index, name, &data, &dtlck);
/* linelock stbl of non-root leaf page */
if (!(p->header.flag & BT_ROOT)) {
if (dtlck->index >= dtlck->maxcnt)
dtlck = (struct dt_lock *) txLinelock(dtlck);
lv = & dtlck->lv[dtlck->index];
n = index >> L2DTSLOTSIZE;
lv->offset = p->header.stblindex + n;
lv->length =
((p->header.nextindex - 1) >> L2DTSLOTSIZE) - n + 1;
dtlck->index++;
}
/* unpin the leaf page */
DT_PUTPAGE(mp);
return 0;
}
/*
* dtSplitUp()
*
* function: propagate insertion bottom up;
*
* parameter:
*
* return: 0 - success;
* errno - failure;
* leaf page unpinned;
*/
static int dtSplitUp(tid_t tid,
struct inode *ip, struct dtsplit * split, struct btstack * btstack)
{
struct jfs_sb_info *sbi = JFS_SBI(ip->i_sb);
int rc = 0;
struct metapage *smp;
dtpage_t *sp; /* split page */
struct metapage *rmp;
dtpage_t *rp; /* new right page split from sp */
pxd_t rpxd; /* new right page extent descriptor */
struct metapage *lmp;
dtpage_t *lp; /* left child page */
int skip; /* index of entry of insertion */
struct btframe *parent; /* parent page entry on traverse stack */
s64 xaddr, nxaddr;
int xlen, xsize;
struct pxdlist pxdlist;
pxd_t *pxd;
struct component_name key = { 0, NULL };
ddata_t *data = split->data;
int n;
struct dt_lock *dtlck;
struct tlock *tlck;
struct lv *lv;
int quota_allocation = 0;
/* get split page */
smp = split->mp;
sp = DT_PAGE(ip, smp);
key.name = kmalloc_array(JFS_NAME_MAX + 2, sizeof(wchar_t), GFP_NOFS);
if (!key.name) {
DT_PUTPAGE(smp);
rc = -ENOMEM;
goto dtSplitUp_Exit;
}
/*
* split leaf page
*
* The split routines insert the new entry, and
* acquire txLock as appropriate.
*/
/*
* split root leaf page:
*/
if (sp->header.flag & BT_ROOT) {
/*
* allocate a single extent child page
*/
xlen = 1;
n = sbi->bsize >> L2DTSLOTSIZE;
n -= (n + 31) >> L2DTSLOTSIZE; /* stbl size */
n -= DTROOTMAXSLOT - sp->header.freecnt; /* header + entries */
if (n <= split->nslot)
xlen++;
if ((rc = dbAlloc(ip, 0, (s64) xlen, &xaddr))) {
DT_PUTPAGE(smp);
goto freeKeyName;
}
pxdlist.maxnpxd = 1;
pxdlist.npxd = 0;
pxd = &pxdlist.pxd[0];
PXDaddress(pxd, xaddr);
PXDlength(pxd, xlen);
split->pxdlist = &pxdlist;
rc = dtSplitRoot(tid, ip, split, &rmp);
if (rc)
dbFree(ip, xaddr, xlen);
else
DT_PUTPAGE(rmp);
DT_PUTPAGE(smp);
if (!DO_INDEX(ip))
ip->i_size = xlen << sbi->l2bsize;
goto freeKeyName;
}
/*
* extend first leaf page
*
* extend the 1st extent if less than buffer page size
* (dtExtendPage() returns leaf page unpinned)
*/
pxd = &sp->header.self;
xlen = lengthPXD(pxd);
xsize = xlen << sbi->l2bsize;
if (xsize < PSIZE) {
xaddr = addressPXD(pxd);
n = xsize >> L2DTSLOTSIZE;
n -= (n + 31) >> L2DTSLOTSIZE; /* stbl size */
if ((n + sp->header.freecnt) <= split->nslot)
n = xlen + (xlen << 1);
else
n = xlen;
/* Allocate blocks to quota. */
rc = dquot_alloc_block(ip, n);
if (rc)
goto extendOut;
quota_allocation += n;
if ((rc = dbReAlloc(sbi->ipbmap, xaddr, (s64) xlen,
(s64) n, &nxaddr)))
goto extendOut;
pxdlist.maxnpxd = 1;
pxdlist.npxd = 0;
pxd = &pxdlist.pxd[0];
PXDaddress(pxd, nxaddr);
PXDlength(pxd, xlen + n);
split->pxdlist = &pxdlist;
if ((rc = dtExtendPage(tid, ip, split, btstack))) {
nxaddr = addressPXD(pxd);
if (xaddr != nxaddr) {
/* free relocated extent */
xlen = lengthPXD(pxd);
dbFree(ip, nxaddr, (s64) xlen);
} else {
/* free extended delta */
xlen = lengthPXD(pxd) - n;
xaddr = addressPXD(pxd) + xlen;
dbFree(ip, xaddr, (s64) n);
}
} else if (!DO_INDEX(ip))
ip->i_size = lengthPXD(pxd) << sbi->l2bsize;
extendOut:
DT_PUTPAGE(smp);
goto freeKeyName;
}
/*
* split leaf page <sp> into <sp> and a new right page <rp>.
*
* return <rp> pinned and its extent descriptor <rpxd>
*/
/*
* allocate new directory page extent and
* new index page(s) to cover page split(s)
*
* allocation hint: ?
*/
n = btstack->nsplit;
pxdlist.maxnpxd = pxdlist.npxd = 0;
xlen = sbi->nbperpage;
for (pxd = pxdlist.pxd; n > 0; n--, pxd++) {
if ((rc = dbAlloc(ip, 0, (s64) xlen, &xaddr)) == 0) {
PXDaddress(pxd, xaddr);
PXDlength(pxd, xlen);
pxdlist.maxnpxd++;
continue;
}
DT_PUTPAGE(smp);
/* undo allocation */
goto splitOut;
}
split->pxdlist = &pxdlist;
if ((rc = dtSplitPage(tid, ip, split, &rmp, &rp, &rpxd))) {
DT_PUTPAGE(smp);
/* undo allocation */
goto splitOut;
}
if (!DO_INDEX(ip))
ip->i_size += PSIZE;
/*
* propagate up the router entry for the leaf page just split
*
* insert a router entry for the new page into the parent page,
* propagate the insert/split up the tree by walking back the stack
* of (bn of parent page, index of child page entry in parent page)
* that were traversed during the search for the page that split.
*
* the propagation of insert/split up the tree stops if the root
* splits or the page inserted into doesn't have to split to hold
* the new entry.
*
* the parent entry for the split page remains the same, and
* a new entry is inserted at its right with the first key and
* block number of the new right page.
*
* There are a maximum of 4 pages pinned at any time:
* two children, left parent and right parent (when the parent splits).
* keep the child pages pinned while working on the parent.
* make sure that all pins are released at exit.
*/
while ((parent = BT_POP(btstack)) != NULL) {
/* parent page specified by stack frame <parent> */
/* keep current child pages (<lp>, <rp>) pinned */
lmp = smp;
lp = sp;
/*
* insert router entry in parent for new right child page <rp>
*/
/* get the parent page <sp> */
DT_GETPAGE(ip, parent->bn, smp, PSIZE, sp, rc);
if (rc) {
DT_PUTPAGE(lmp);
DT_PUTPAGE(rmp);
goto splitOut;
}
/*
* The new key entry goes ONE AFTER the index of parent entry,
* because the split was to the right.
*/
skip = parent->index + 1;
/*
* compute the key for the router entry
*
* key suffix compression:
* for internal pages that have leaf pages as children,
* retain only what's needed to distinguish between
* the new entry and the entry on the page to its left.
* If the keys compare equal, retain the entire key.
*
* note that compression is performed only when computing the
* router key at the lowest internal level.
* further compression of the key between pairs of higher
* level internal pages loses too much information and
* the search may fail.
* (e.g., two adjacent leaf pages of {a, ..., x} {xx, ...,}
* results in two adjacent parent entries (a)(xx).
* if split occurs between these two entries, and
* if compression is applied, the router key of parent entry
* of right page (x) will divert search for x into right
* subtree and miss x in the left subtree.)
*
* the entire key must be retained for the next-to-leftmost
* internal key at any level of the tree, or search may fail
* (e.g., ?)
*/
switch (rp->header.flag & BT_TYPE) {
case BT_LEAF:
/*
* compute the length of prefix for suffix compression
* between last entry of left page and first entry
* of right page
*/
if ((sp->header.flag & BT_ROOT && skip > 1) ||
sp->header.prev != 0 || skip > 1) {
/* compute uppercase router prefix key */
rc = ciGetLeafPrefixKey(lp,
lp->header.nextindex-1,
rp, 0, &key,
sbi->mntflag);
if (rc) {
DT_PUTPAGE(lmp);
DT_PUTPAGE(rmp);
DT_PUTPAGE(smp);
goto splitOut;
}
} else {
/* next to leftmost entry of
lowest internal level */
/* compute uppercase router key */
dtGetKey(rp, 0, &key, sbi->mntflag);
key.name[key.namlen] = 0;
if ((sbi->mntflag & JFS_OS2) == JFS_OS2)
ciToUpper(&key);
}
n = NDTINTERNAL(key.namlen);
break;
case BT_INTERNAL:
dtGetKey(rp, 0, &key, sbi->mntflag);
n = NDTINTERNAL(key.namlen);
break;
default:
jfs_err("dtSplitUp(): UFO!");
break;
}
/* unpin left child page */
DT_PUTPAGE(lmp);
/*
* compute the data for the router entry
*/
data->xd = rpxd; /* child page xd */
/*
* parent page is full - split the parent page
*/
if (n > sp->header.freecnt) {
/* init for parent page split */
split->mp = smp;
split->index = skip; /* index at insert */
split->nslot = n;
split->key = &key;
/* split->data = data; */
/* unpin right child page */
DT_PUTPAGE(rmp);
/* The split routines insert the new entry,
* acquire txLock as appropriate.
* return <rp> pinned and its block number <rbn>.
*/
rc = (sp->header.flag & BT_ROOT) ?
dtSplitRoot(tid, ip, split, &rmp) :
dtSplitPage(tid, ip, split, &rmp, &rp, &rpxd);
if (rc) {
DT_PUTPAGE(smp);
goto splitOut;
}
/* smp and rmp are pinned */
}
/*
* parent page is not full - insert router entry in parent page
*/
else {
BT_MARK_DIRTY(smp, ip);
/*
* acquire a transaction lock on the parent page
*/
tlck = txLock(tid, ip, smp, tlckDTREE | tlckENTRY);
dtlck = (struct dt_lock *) & tlck->lock;
ASSERT(dtlck->index == 0);
lv = & dtlck->lv[0];
/* linelock header */
lv->offset = 0;
lv->length = 1;
dtlck->index++;
/* linelock stbl of non-root parent page */
if (!(sp->header.flag & BT_ROOT)) {
lv++;
n = skip >> L2DTSLOTSIZE;
lv->offset = sp->header.stblindex + n;
lv->length =
((sp->header.nextindex -
1) >> L2DTSLOTSIZE) - n + 1;
dtlck->index++;
}
dtInsertEntry(sp, skip, &key, data, &dtlck);
/* exit propagate up */
break;
}
}
/* unpin current split and its right page */
DT_PUTPAGE(smp);
DT_PUTPAGE(rmp);
/*
* free remaining extents allocated for split
*/
splitOut:
n = pxdlist.npxd;
pxd = &pxdlist.pxd[n];
for (; n < pxdlist.maxnpxd; n++, pxd++)
dbFree(ip, addressPXD(pxd), (s64) lengthPXD(pxd));
freeKeyName:
kfree(key.name);
/* Rollback quota allocation */
if (rc && quota_allocation)
dquot_free_block(ip, quota_allocation);
dtSplitUp_Exit:
return rc;
}
/*
* dtSplitPage()
*
* function: Split a non-root page of a btree.
*
* parameter:
*
* return: 0 - success;
* errno - failure;
* return split and new page pinned;
*/
static int dtSplitPage(tid_t tid, struct inode *ip, struct dtsplit * split,
struct metapage ** rmpp, dtpage_t ** rpp, pxd_t * rpxdp)
{
int rc = 0;
struct metapage *smp;
dtpage_t *sp;
struct metapage *rmp;
dtpage_t *rp; /* new right page allocated */
s64 rbn; /* new right page block number */
struct metapage *mp;
dtpage_t *p;
s64 nextbn;
struct pxdlist *pxdlist;
pxd_t *pxd;
int skip, nextindex, half, left, nxt, off, si;
struct ldtentry *ldtentry;
struct idtentry *idtentry;
u8 *stbl;
struct dtslot *f;
int fsi, stblsize;
int n;
struct dt_lock *sdtlck, *rdtlck;
struct tlock *tlck;
struct dt_lock *dtlck;
struct lv *slv, *rlv, *lv;
/* get split page */
smp = split->mp;
sp = DT_PAGE(ip, smp);
/*
* allocate the new right page for the split
*/
pxdlist = split->pxdlist;
pxd = &pxdlist->pxd[pxdlist->npxd];
pxdlist->npxd++;
rbn = addressPXD(pxd);
rmp = get_metapage(ip, rbn, PSIZE, 1);
if (rmp == NULL)
return -EIO;
/* Allocate blocks to quota. */
rc = dquot_alloc_block(ip, lengthPXD(pxd));
if (rc) {
release_metapage(rmp);
return rc;
}
jfs_info("dtSplitPage: ip:0x%p smp:0x%p rmp:0x%p", ip, smp, rmp);
BT_MARK_DIRTY(rmp, ip);
/*
* acquire a transaction lock on the new right page
*/
tlck = txLock(tid, ip, rmp, tlckDTREE | tlckNEW);
rdtlck = (struct dt_lock *) & tlck->lock;
rp = (dtpage_t *) rmp->data;
*rpp = rp;
rp->header.self = *pxd;
BT_MARK_DIRTY(smp, ip);
/*
* acquire a transaction lock on the split page
*
* action:
*/
tlck = txLock(tid, ip, smp, tlckDTREE | tlckENTRY);
sdtlck = (struct dt_lock *) & tlck->lock;
/* linelock header of split page */
ASSERT(sdtlck->index == 0);
slv = & sdtlck->lv[0];
slv->offset = 0;
slv->length = 1;
sdtlck->index++;
/*
* initialize/update sibling pointers between sp and rp
*/
nextbn = le64_to_cpu(sp->header.next);
rp->header.next = cpu_to_le64(nextbn);
rp->header.prev = cpu_to_le64(addressPXD(&sp->header.self));
sp->header.next = cpu_to_le64(rbn);
/*
* initialize new right page
*/
rp->header.flag = sp->header.flag;
/* compute sorted entry table at start of extent data area */
rp->header.nextindex = 0;
rp->header.stblindex = 1;
n = PSIZE >> L2DTSLOTSIZE;
rp->header.maxslot = n;
stblsize = (n + 31) >> L2DTSLOTSIZE; /* in unit of slot */
/* init freelist */
fsi = rp->header.stblindex + stblsize;
rp->header.freelist = fsi;
rp->header.freecnt = rp->header.maxslot - fsi;
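/*
 * Sanity check of the geometry above (assuming PSIZE = 4096 and
 * 32-byte slots, L2DTSLOTSIZE = 5): maxslot = 4096 >> 5 = 128,
 * stblsize = (128 + 31) >> 5 = 4 slots for the 128 one-byte stbl
 * entries, so the freelist starts at slot 1 + 4 = 5 and
 * freecnt = 128 - 5 = 123.
 */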
/*
* sequential append at tail: append without split
*
* If splitting the last page on a level because of appending
* an entry to it (skip is maxentry), it's likely that the access is
* sequential. Adding an empty page on the side of the level is less
* work and can push the fill factor much higher than normal.
* If we're wrong it's no big deal, we'll just do the split the right
* way next time.
* (It may look like it's equally easy to do a similar hack for
* reverse sorted data, that is, split the tree left,
* but it's not. Be my guest.)
*/
if (nextbn == 0 && split->index == sp->header.nextindex) {
/* linelock header + stbl (first slot) of new page */
rlv = & rdtlck->lv[rdtlck->index];
rlv->offset = 0;
rlv->length = 2;
rdtlck->index++;
/*
* initialize freelist of new right page
*/
f = &rp->slot[fsi];
for (fsi++; fsi < rp->header.maxslot; f++, fsi++)
f->next = fsi;
f->next = -1;
/* insert entry at the first entry of the new right page */
dtInsertEntry(rp, 0, split->key, split->data, &rdtlck);
goto out;
}
/*
* non-sequential insert (at possibly middle page)
*/
/*
* update prev pointer of previous right sibling page;
*/
if (nextbn != 0) {
DT_GETPAGE(ip, nextbn, mp, PSIZE, p, rc);
if (rc) {
discard_metapage(rmp);
return rc;
}
BT_MARK_DIRTY(mp, ip);
/*
* acquire a transaction lock on the next page
*/
tlck = txLock(tid, ip, mp, tlckDTREE | tlckRELINK);
jfs_info("dtSplitPage: tlck = 0x%p, ip = 0x%p, mp=0x%p",
tlck, ip, mp);
dtlck = (struct dt_lock *) & tlck->lock;
/* linelock header of previous right sibling page */
lv = & dtlck->lv[dtlck->index];
lv->offset = 0;
lv->length = 1;
dtlck->index++;
p->header.prev = cpu_to_le64(rbn);
DT_PUTPAGE(mp);
}
/*
* split the data between the split and right pages.
*/
skip = split->index;
half = (PSIZE >> L2DTSLOTSIZE) >> 1; /* swag */
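/*
 * e.g. with PSIZE = 4096 and 32-byte slots (L2DTSLOTSIZE = 5),
 * half = (4096 >> 5) >> 1 = 64 of the page's 128 slots, i.e. a
 * rough 50% fill target (illustrative values).
 */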
left = 0;
/*
* compute fill factor for split pages
*
* <nxt> traces the next entry to move to rp
* <off> traces the next entry to stay in sp
*/
stbl = (u8 *) & sp->slot[sp->header.stblindex];
nextindex = sp->header.nextindex;
for (nxt = off = 0; nxt < nextindex; ++off) {
if (off == skip)
/* check for fill factor with new entry size */
n = split->nslot;
else {
si = stbl[nxt];
switch (sp->header.flag & BT_TYPE) {
case BT_LEAF:
ldtentry = (struct ldtentry *) & sp->slot[si];
if (DO_INDEX(ip))
n = NDTLEAF(ldtentry->namlen);
else
n = NDTLEAF_LEGACY(ldtentry->
namlen);
break;
case BT_INTERNAL:
idtentry = (struct idtentry *) & sp->slot[si];
n = NDTINTERNAL(idtentry->namlen);
break;
default:
break;
}
++nxt; /* advance to next entry to move in sp */
}
left += n;
if (left >= half)
break;
}
/* <nxt> points to the 1st entry to move */
/*
* move entries to right page
*
* dtMoveEntry() initializes rp and reserves entry for insertion
*
* split page moved out entries are linelocked;
* new/right page moved in entries are linelocked;
*/
/* linelock header + stbl of new right page */
rlv = & rdtlck->lv[rdtlck->index];
rlv->offset = 0;
rlv->length = 5;
rdtlck->index++;
dtMoveEntry(sp, nxt, rp, &sdtlck, &rdtlck, DO_INDEX(ip));
sp->header.nextindex = nxt;
/*
* finalize freelist of new right page
*/
fsi = rp->header.freelist;
f = &rp->slot[fsi];
for (fsi++; fsi < rp->header.maxslot; f++, fsi++)
f->next = fsi;
f->next = -1;
/*
* Update directory index table for entries now in right page
*/
if ((rp->header.flag & BT_LEAF) && DO_INDEX(ip)) {
s64 lblock;
mp = NULL;
stbl = DT_GETSTBL(rp);
for (n = 0; n < rp->header.nextindex; n++) {
ldtentry = (struct ldtentry *) & rp->slot[stbl[n]];
modify_index(tid, ip, le32_to_cpu(ldtentry->index),
rbn, n, &mp, &lblock);
}
if (mp)
release_metapage(mp);
}
/*
* the skipped index was on the left page,
*/
if (skip <= off) {
/* insert the new entry in the split page */
dtInsertEntry(sp, skip, split->key, split->data, &sdtlck);
/* linelock stbl of split page */
if (sdtlck->index >= sdtlck->maxcnt)
sdtlck = (struct dt_lock *) txLinelock(sdtlck);
slv = & sdtlck->lv[sdtlck->index];
n = skip >> L2DTSLOTSIZE;
slv->offset = sp->header.stblindex + n;
slv->length =
((sp->header.nextindex - 1) >> L2DTSLOTSIZE) - n + 1;
sdtlck->index++;
}
/*
* the skipped index was on the right page,
*/
else {
/* adjust the skip index to reflect the new position */
skip -= nxt;
/* insert the new entry in the right page */
dtInsertEntry(rp, skip, split->key, split->data, &rdtlck);
}
out:
*rmpp = rmp;
*rpxdp = *pxd;
return rc;
}
/*
* dtExtendPage()
*
* function: extend 1st/only directory leaf page
*
* parameter:
*
* return: 0 - success;
* errno - failure;
* return extended page pinned;
*/
static int dtExtendPage(tid_t tid,
struct inode *ip, struct dtsplit * split, struct btstack * btstack)
{
struct super_block *sb = ip->i_sb;
int rc;
struct metapage *smp, *pmp, *mp;
dtpage_t *sp, *pp;
struct pxdlist *pxdlist;
pxd_t *pxd, *tpxd;
int xlen, xsize;
int newstblindex, newstblsize;
int oldstblindex, oldstblsize;
int fsi, last;
struct dtslot *f;
struct btframe *parent;
int n;
struct dt_lock *dtlck;
s64 xaddr, txaddr;
struct tlock *tlck;
struct pxd_lock *pxdlock;
struct lv *lv;
uint type;
struct ldtentry *ldtentry;
u8 *stbl;
/* get page to extend */
smp = split->mp;
sp = DT_PAGE(ip, smp);
/* get parent/root page */
parent = BT_POP(btstack);
DT_GETPAGE(ip, parent->bn, pmp, PSIZE, pp, rc);
if (rc)
return (rc);
/*
* extend the extent
*/
pxdlist = split->pxdlist;
pxd = &pxdlist->pxd[pxdlist->npxd];
pxdlist->npxd++;
xaddr = addressPXD(pxd);
tpxd = &sp->header.self;
txaddr = addressPXD(tpxd);
/* in-place extension */
if (xaddr == txaddr) {
type = tlckEXTEND;
}
/* relocation */
else {
type = tlckNEW;
/* save moved extent descriptor for later free */
tlck = txMaplock(tid, ip, tlckDTREE | tlckRELOCATE);
pxdlock = (struct pxd_lock *) & tlck->lock;
pxdlock->flag = mlckFREEPXD;
pxdlock->pxd = sp->header.self;
pxdlock->index = 1;
/*
* Update directory index table to reflect new page address
*/
if (DO_INDEX(ip)) {
s64 lblock;
mp = NULL;
stbl = DT_GETSTBL(sp);
for (n = 0; n < sp->header.nextindex; n++) {
ldtentry =
(struct ldtentry *) & sp->slot[stbl[n]];
modify_index(tid, ip,
le32_to_cpu(ldtentry->index),
xaddr, n, &mp, &lblock);
}
if (mp)
release_metapage(mp);
}
}
/*
* extend the page
*/
sp->header.self = *pxd;
jfs_info("dtExtendPage: ip:0x%p smp:0x%p sp:0x%p", ip, smp, sp);
BT_MARK_DIRTY(smp, ip);
/*
* acquire a transaction lock on the extended/leaf page
*/
tlck = txLock(tid, ip, smp, tlckDTREE | type);
dtlck = (struct dt_lock *) & tlck->lock;
lv = & dtlck->lv[0];
/* update buffer extent descriptor of extended page */
xlen = lengthPXD(pxd);
xsize = xlen << JFS_SBI(sb)->l2bsize;
/*
* copy old stbl to new stbl at start of extended area
*/
oldstblindex = sp->header.stblindex;
oldstblsize = (sp->header.maxslot + 31) >> L2DTSLOTSIZE;
newstblindex = sp->header.maxslot;
n = xsize >> L2DTSLOTSIZE;
newstblsize = (n + 31) >> L2DTSLOTSIZE;
memcpy(&sp->slot[newstblindex], &sp->slot[oldstblindex],
sp->header.nextindex);
/*
* in-line extension: linelock old area of extended page
*/
if (type == tlckEXTEND) {
/* linelock header */
lv->offset = 0;
lv->length = 1;
dtlck->index++;
lv++;
/* linelock new stbl of extended page */
lv->offset = newstblindex;
lv->length = newstblsize;
}
/*
* relocation: linelock whole relocated area
*/
else {
lv->offset = 0;
lv->length = sp->header.maxslot + newstblsize;
}
dtlck->index++;
sp->header.maxslot = n;
sp->header.stblindex = newstblindex;
/* sp->header.nextindex remains the same */
/*
* add old stbl region at head of freelist
*/
fsi = oldstblindex;
f = &sp->slot[fsi];
last = sp->header.freelist;
for (n = 0; n < oldstblsize; n++, fsi++, f++) {
f->next = last;
last = fsi;
}
sp->header.freelist = last;
sp->header.freecnt += oldstblsize;
/*
* append free region of newly extended area at tail of freelist
*/
/* init free region of newly extended area */
fsi = n = newstblindex + newstblsize;
f = &sp->slot[fsi];
for (fsi++; fsi < sp->header.maxslot; f++, fsi++)
f->next = fsi;
f->next = -1;
/* append new free region at tail of old freelist */
fsi = sp->header.freelist;
if (fsi == -1)
sp->header.freelist = n;
else {
do {
f = &sp->slot[fsi];
fsi = f->next;
} while (fsi != -1);
f->next = n;
}
sp->header.freecnt += sp->header.maxslot - n;
/*
* insert the new entry
*/
dtInsertEntry(sp, split->index, split->key, split->data, &dtlck);
BT_MARK_DIRTY(pmp, ip);
/*
* linelock any freeslots residing in old extent
*/
if (type == tlckEXTEND) {
n = sp->header.maxslot >> 2;
if (sp->header.freelist < n)
dtLinelockFreelist(sp, n, &dtlck);
}
/*
* update parent entry on the parent/root page
*/
/*
* acquire a transaction lock on the parent/root page
*/
tlck = txLock(tid, ip, pmp, tlckDTREE | tlckENTRY);
dtlck = (struct dt_lock *) & tlck->lock;
lv = & dtlck->lv[dtlck->index];
/* linelock parent entry - 1st slot */
lv->offset = 1;
lv->length = 1;
dtlck->index++;
/* update the parent pxd for page extension */
tpxd = (pxd_t *) & pp->slot[1];
*tpxd = *pxd;
DT_PUTPAGE(pmp);
return 0;
}
/*
* dtSplitRoot()
*
* function:
* split the full root page into
* original/root/split page and new right page
* i.e., root remains fixed in tree anchor (inode) and
* the root is copied to a single new right child page
* since root page << non-root page, and
* the split root page contains a single entry for the
* new right child page.
*
* parameter:
*
* return: 0 - success;
* errno - failure;
* return new page pinned;
*/
static int dtSplitRoot(tid_t tid,
struct inode *ip, struct dtsplit * split, struct metapage ** rmpp)
{
struct super_block *sb = ip->i_sb;
struct metapage *smp;
dtroot_t *sp;
struct metapage *rmp;
dtpage_t *rp;
s64 rbn;
int xlen;
int xsize;
struct dtslot *f;
s8 *stbl;
int fsi, stblsize, n;
struct idtentry *s;
pxd_t *ppxd;
struct pxdlist *pxdlist;
pxd_t *pxd;
struct dt_lock *dtlck;
struct tlock *tlck;
struct lv *lv;
int rc;
/* get split root page */
smp = split->mp;
sp = &JFS_IP(ip)->i_dtroot;
/*
* allocate/initialize a single (right) child page
*
* N.B. at the first split, one (or two) blocks are allocated to
* fit the new entry; at subsequent splits, a full page is allocated;
*/
pxdlist = split->pxdlist;
pxd = &pxdlist->pxd[pxdlist->npxd];
pxdlist->npxd++;
rbn = addressPXD(pxd);
xlen = lengthPXD(pxd);
xsize = xlen << JFS_SBI(sb)->l2bsize;
rmp = get_metapage(ip, rbn, xsize, 1);
if (!rmp)
return -EIO;
rp = rmp->data;
/* Allocate blocks to quota. */
rc = dquot_alloc_block(ip, lengthPXD(pxd));
if (rc) {
release_metapage(rmp);
return rc;
}
BT_MARK_DIRTY(rmp, ip);
/*
* acquire a transaction lock on the new right page
*/
tlck = txLock(tid, ip, rmp, tlckDTREE | tlckNEW);
dtlck = (struct dt_lock *) & tlck->lock;
rp->header.flag =
(sp->header.flag & BT_LEAF) ? BT_LEAF : BT_INTERNAL;
rp->header.self = *pxd;
/* initialize sibling pointers */
rp->header.next = 0;
rp->header.prev = 0;
/*
* move in-line root page into new right page extent
*/
/* linelock header + copied entries + new stbl (1st slot) in new page */
ASSERT(dtlck->index == 0);
lv = & dtlck->lv[0];
lv->offset = 0;
lv->length = 10; /* 1 + 8 + 1 */
dtlck->index++;
n = xsize >> L2DTSLOTSIZE;
rp->header.maxslot = n;
stblsize = (n + 31) >> L2DTSLOTSIZE;
/* copy old stbl to new stbl at start of extended area */
rp->header.stblindex = DTROOTMAXSLOT;
stbl = (s8 *) & rp->slot[DTROOTMAXSLOT];
memcpy(stbl, sp->header.stbl, sp->header.nextindex);
rp->header.nextindex = sp->header.nextindex;
/* copy old data area to start of new data area */
memcpy(&rp->slot[1], &sp->slot[1], IDATASIZE);
/*
* append free region of newly extended area at tail of freelist
*/
/* init free region of newly extended area */
fsi = n = DTROOTMAXSLOT + stblsize;
f = &rp->slot[fsi];
for (fsi++; fsi < rp->header.maxslot; f++, fsi++)
f->next = fsi;
f->next = -1;
/* append new free region at tail of old freelist */
fsi = sp->header.freelist;
if (fsi == -1)
rp->header.freelist = n;
else {
rp->header.freelist = fsi;
do {
f = &rp->slot[fsi];
fsi = f->next;
} while (fsi >= 0);
f->next = n;
}
rp->header.freecnt = sp->header.freecnt + rp->header.maxslot - n;
/*
* Update directory index table for entries now in right page
*/
if ((rp->header.flag & BT_LEAF) && DO_INDEX(ip)) {
s64 lblock;
struct metapage *mp = NULL;
struct ldtentry *ldtentry;
stbl = DT_GETSTBL(rp);
for (n = 0; n < rp->header.nextindex; n++) {
ldtentry = (struct ldtentry *) & rp->slot[stbl[n]];
modify_index(tid, ip, le32_to_cpu(ldtentry->index),
rbn, n, &mp, &lblock);
}
if (mp)
release_metapage(mp);
}
/*
* insert the new entry into the new right/child page
* (skip index in the new right page will not change)
*/
dtInsertEntry(rp, split->index, split->key, split->data, &dtlck);
/*
* reset parent/root page
*
* set the 1st entry offset to 0, which forces the left-most key
* at any level of the tree to be less than any search key.
*
* The btree comparison code guarantees that the left-most key on any
* level of the tree is never used, so it doesn't need to be filled in.
*/
BT_MARK_DIRTY(smp, ip);
/*
* acquire a transaction lock on the root page (in-memory inode)
*/
tlck = txLock(tid, ip, smp, tlckDTREE | tlckNEW | tlckBTROOT);
dtlck = (struct dt_lock *) & tlck->lock;
/* linelock root */
ASSERT(dtlck->index == 0);
lv = & dtlck->lv[0];
lv->offset = 0;
lv->length = DTROOTMAXSLOT;
dtlck->index++;
/* update page header of root */
if (sp->header.flag & BT_LEAF) {
sp->header.flag &= ~BT_LEAF;
sp->header.flag |= BT_INTERNAL;
}
/* init the first entry */
s = (struct idtentry *) & sp->slot[DTENTRYSTART];
ppxd = (pxd_t *) s;
*ppxd = *pxd;
s->next = -1;
s->namlen = 0;
stbl = sp->header.stbl;
stbl[0] = DTENTRYSTART;
sp->header.nextindex = 1;
/* init freelist */
fsi = DTENTRYSTART + 1;
f = &sp->slot[fsi];
/* init free region of remaining area */
for (fsi++; fsi < DTROOTMAXSLOT; f++, fsi++)
f->next = fsi;
f->next = -1;
sp->header.freelist = DTENTRYSTART + 1;
sp->header.freecnt = DTROOTMAXSLOT - (DTENTRYSTART + 1);
*rmpp = rmp;
return 0;
}
/*
* dtDelete()
*
* function: delete the entry(s) referenced by a key.
*
* parameter:
*
* return:
*/
int dtDelete(tid_t tid,
struct inode *ip, struct component_name * key, ino_t * ino, int flag)
{
int rc = 0;
s64 bn;
struct metapage *mp, *imp;
dtpage_t *p;
int index;
struct btstack btstack;
struct dt_lock *dtlck;
struct tlock *tlck;
struct lv *lv;
int i;
struct ldtentry *ldtentry;
u8 *stbl;
u32 table_index, next_index;
struct metapage *nmp;
dtpage_t *np;
/*
* search for the entry to delete:
*
* dtSearch() returns (leaf page pinned, index at which to delete).
*/
if ((rc = dtSearch(ip, key, ino, &btstack, flag)))
return rc;
/* retrieve search result */
DT_GETSEARCH(ip, btstack.top, bn, mp, p, index);
/*
* We need to put the index of the next entry into the
* directory index table in order to resume a readdir from this
* entry.
*/
if (DO_INDEX(ip)) {
stbl = DT_GETSTBL(p);
ldtentry = (struct ldtentry *) & p->slot[stbl[index]];
table_index = le32_to_cpu(ldtentry->index);
if (index == (p->header.nextindex - 1)) {
/*
* Last entry in this leaf page
*/
if ((p->header.flag & BT_ROOT)
|| (p->header.next == 0))
next_index = -1;
else {
/* Read next leaf page */
DT_GETPAGE(ip, le64_to_cpu(p->header.next),
nmp, PSIZE, np, rc);
if (rc)
next_index = -1;
else {
stbl = DT_GETSTBL(np);
ldtentry =
(struct ldtentry *) & np->
slot[stbl[0]];
next_index =
le32_to_cpu(ldtentry->index);
DT_PUTPAGE(nmp);
}
}
} else {
ldtentry =
(struct ldtentry *) & p->slot[stbl[index + 1]];
next_index = le32_to_cpu(ldtentry->index);
}
free_index(tid, ip, table_index, next_index);
}
/*
* the leaf page becomes empty, delete the page
*/
if (p->header.nextindex == 1) {
/* delete empty page */
rc = dtDeleteUp(tid, ip, mp, p, &btstack);
}
/*
* the leaf page has other entries remaining:
*
* delete the entry from the leaf page.
*/
else {
BT_MARK_DIRTY(mp, ip);
/*
* acquire a transaction lock on the leaf page
*/
tlck = txLock(tid, ip, mp, tlckDTREE | tlckENTRY);
dtlck = (struct dt_lock *) & tlck->lock;
/*
* Do not assume that dtlck->index will be zero. During a
* rename within a directory, this transaction may have
* modified this page already when adding the new entry.
*/
/* linelock header */
if (dtlck->index >= dtlck->maxcnt)
dtlck = (struct dt_lock *) txLinelock(dtlck);
lv = & dtlck->lv[dtlck->index];
lv->offset = 0;
lv->length = 1;
dtlck->index++;
/* linelock stbl of non-root leaf page */
if (!(p->header.flag & BT_ROOT)) {
if (dtlck->index >= dtlck->maxcnt)
dtlck = (struct dt_lock *) txLinelock(dtlck);
lv = & dtlck->lv[dtlck->index];
i = index >> L2DTSLOTSIZE;
lv->offset = p->header.stblindex + i;
lv->length =
((p->header.nextindex - 1) >> L2DTSLOTSIZE) -
i + 1;
dtlck->index++;
}
/* free the leaf entry */
dtDeleteEntry(p, index, &dtlck);
/*
* Update directory index table for entries moved in stbl
*/
if (DO_INDEX(ip) && index < p->header.nextindex) {
s64 lblock;
imp = NULL;
stbl = DT_GETSTBL(p);
for (i = index; i < p->header.nextindex; i++) {
ldtentry =
(struct ldtentry *) & p->slot[stbl[i]];
modify_index(tid, ip,
le32_to_cpu(ldtentry->index),
bn, i, &imp, &lblock);
}
if (imp)
release_metapage(imp);
}
DT_PUTPAGE(mp);
}
return rc;
}
/*
* dtDeleteUp()
*
* function:
* free empty pages as propagating deletion up the tree
*
* parameter:
*
* return:
*/
static int dtDeleteUp(tid_t tid, struct inode *ip,
struct metapage * fmp, dtpage_t * fp, struct btstack * btstack)
{
int rc = 0;
struct metapage *mp;
dtpage_t *p;
int index, nextindex;
int xlen;
struct btframe *parent;
struct dt_lock *dtlck;
struct tlock *tlck;
struct lv *lv;
struct pxd_lock *pxdlock;
int i;
/*
* keep the root leaf page which has become empty
*/
if (BT_IS_ROOT(fmp)) {
/*
* reset the root
*
* dtInitRoot() acquires txlock on the root
*/
dtInitRoot(tid, ip, PARENT(ip));
DT_PUTPAGE(fmp);
return 0;
}
/*
* free the non-root leaf page
*/
/*
* acquire a transaction lock on the page
*
* write FREEXTENT|NOREDOPAGE log record
* N.B. linelock is overlaid as freed extent descriptor, and
* the buffer page is freed;
*/
tlck = txMaplock(tid, ip, tlckDTREE | tlckFREE);
pxdlock = (struct pxd_lock *) & tlck->lock;
pxdlock->flag = mlckFREEPXD;
pxdlock->pxd = fp->header.self;
pxdlock->index = 1;
/* update sibling pointers */
if ((rc = dtRelink(tid, ip, fp))) {
BT_PUTPAGE(fmp);
return rc;
}
xlen = lengthPXD(&fp->header.self);
/* Free quota allocation. */
dquot_free_block(ip, xlen);
/* free/invalidate its buffer page */
discard_metapage(fmp);
/*
* propagate page deletion up the directory tree
*
* If the delete from the parent page makes it empty,
* continue all the way up the tree.
* stop if the root page is reached (which is never deleted) or
* if the entry deletion does not empty the page.
*/
while ((parent = BT_POP(btstack)) != NULL) {
/* pin the parent page <sp> */
DT_GETPAGE(ip, parent->bn, mp, PSIZE, p, rc);
if (rc)
return rc;
/*
* free the extent of the child page deleted
*/
index = parent->index;
/*
* delete the entry for the child page from parent
*/
nextindex = p->header.nextindex;
/*
* the parent has the single entry being deleted:
*
* free the parent page which has become empty.
*/
if (nextindex == 1) {
/*
* keep the root internal page which has become empty
*/
if (p->header.flag & BT_ROOT) {
/*
* reset the root
*
* dtInitRoot() acquires txlock on the root
*/
dtInitRoot(tid, ip, PARENT(ip));
DT_PUTPAGE(mp);
return 0;
}
/*
* free the parent page
*/
else {
/*
* acquire a transaction lock on the page
*
* write FREEXTENT|NOREDOPAGE log record
*/
tlck =
txMaplock(tid, ip,
tlckDTREE | tlckFREE);
pxdlock = (struct pxd_lock *) & tlck->lock;
pxdlock->flag = mlckFREEPXD;
pxdlock->pxd = p->header.self;
pxdlock->index = 1;
/* update sibling pointers */
if ((rc = dtRelink(tid, ip, p))) {
DT_PUTPAGE(mp);
return rc;
}
xlen = lengthPXD(&p->header.self);
/* Free quota allocation */
dquot_free_block(ip, xlen);
/* free/invalidate its buffer page */
discard_metapage(mp);
/* propagate up */
continue;
}
}
/*
* the parent has other entries remaining:
*
* delete the router entry from the parent page.
*/
BT_MARK_DIRTY(mp, ip);
/*
* acquire a transaction lock on the page
*
* action: router entry deletion
*/
tlck = txLock(tid, ip, mp, tlckDTREE | tlckENTRY);
dtlck = (struct dt_lock *) & tlck->lock;
/* linelock header */
if (dtlck->index >= dtlck->maxcnt)
dtlck = (struct dt_lock *) txLinelock(dtlck);
lv = & dtlck->lv[dtlck->index];
lv->offset = 0;
lv->length = 1;
dtlck->index++;
/* linelock stbl of non-root leaf page */
if (!(p->header.flag & BT_ROOT)) {
if (dtlck->index < dtlck->maxcnt)
lv++;
else {
dtlck = (struct dt_lock *) txLinelock(dtlck);
lv = & dtlck->lv[0];
}
i = index >> L2DTSLOTSIZE;
lv->offset = p->header.stblindex + i;
lv->length =
((p->header.nextindex - 1) >> L2DTSLOTSIZE) -
i + 1;
dtlck->index++;
}
/* free the router entry */
dtDeleteEntry(p, index, &dtlck);
/* reset key of new leftmost entry of level (for consistency) */
if (index == 0 &&
((p->header.flag & BT_ROOT) || p->header.prev == 0))
dtTruncateEntry(p, 0, &dtlck);
/* unpin the parent page */
DT_PUTPAGE(mp);
/* exit propagation up */
break;
}
if (!DO_INDEX(ip))
ip->i_size -= PSIZE;
return 0;
}
/*
* dtRelink()
*
* function:
* link around a freed page.
*
* parameter:
* p: page to be freed
*
* return:
*/
static int dtRelink(tid_t tid, struct inode *ip, dtpage_t * p)
{
int rc;
struct metapage *mp;
s64 nextbn, prevbn;
struct tlock *tlck;
struct dt_lock *dtlck;
struct lv *lv;
nextbn = le64_to_cpu(p->header.next);
prevbn = le64_to_cpu(p->header.prev);
/* update prev pointer of the next page */
if (nextbn != 0) {
DT_GETPAGE(ip, nextbn, mp, PSIZE, p, rc);
if (rc)
return rc;
BT_MARK_DIRTY(mp, ip);
/*
* acquire a transaction lock on the next page
*
* action: update prev pointer;
*/
tlck = txLock(tid, ip, mp, tlckDTREE | tlckRELINK);
jfs_info("dtRelink nextbn: tlck = 0x%p, ip = 0x%p, mp=0x%p",
tlck, ip, mp);
dtlck = (struct dt_lock *) & tlck->lock;
/* linelock header */
if (dtlck->index >= dtlck->maxcnt)
dtlck = (struct dt_lock *) txLinelock(dtlck);
lv = & dtlck->lv[dtlck->index];
lv->offset = 0;
lv->length = 1;
dtlck->index++;
p->header.prev = cpu_to_le64(prevbn);
DT_PUTPAGE(mp);
}
/* update next pointer of the previous page */
if (prevbn != 0) {
DT_GETPAGE(ip, prevbn, mp, PSIZE, p, rc);
if (rc)
return rc;
BT_MARK_DIRTY(mp, ip);
/*
* acquire a transaction lock on the prev page
*
* action: update next pointer;
*/
tlck = txLock(tid, ip, mp, tlckDTREE | tlckRELINK);
jfs_info("dtRelink prevbn: tlck = 0x%p, ip = 0x%p, mp=0x%p",
tlck, ip, mp);
dtlck = (struct dt_lock *) & tlck->lock;
/* linelock header */
if (dtlck->index >= dtlck->maxcnt)
dtlck = (struct dt_lock *) txLinelock(dtlck);
lv = & dtlck->lv[dtlck->index];
lv->offset = 0;
lv->length = 1;
dtlck->index++;
p->header.next = cpu_to_le64(nextbn);
DT_PUTPAGE(mp);
}
return 0;
}
/*
* dtInitRoot()
*
* initialize directory root (inline in inode)
*/
void dtInitRoot(tid_t tid, struct inode *ip, u32 idotdot)
{
struct jfs_inode_info *jfs_ip = JFS_IP(ip);
dtroot_t *p;
int fsi;
struct dtslot *f;
struct tlock *tlck;
struct dt_lock *dtlck;
struct lv *lv;
u16 xflag_save;
/*
* If this was previously a non-empty directory, we need to remove
* the old directory table.
*/
if (DO_INDEX(ip)) {
if (!jfs_dirtable_inline(ip)) {
struct tblock *tblk = tid_to_tblock(tid);
/*
* We're playing games with the tid's xflag. If
* we're removing a regular file, the file's xtree
* is committed with COMMIT_PMAP, but we always
* commit the directory's xtree with COMMIT_PWMAP.
*/
xflag_save = tblk->xflag;
tblk->xflag = 0;
/*
* xtTruncate isn't guaranteed to fully truncate
* the xtree. The caller needs to check i_size
* after committing the transaction to see if
* additional truncation is needed. The
* COMMIT_Stale flag tells caller that we
* initiated the truncation.
*/
xtTruncate(tid, ip, 0, COMMIT_PWMAP);
set_cflag(COMMIT_Stale, ip);
tblk->xflag = xflag_save;
} else
ip->i_size = 1;
jfs_ip->next_index = 2;
} else
ip->i_size = IDATASIZE;
/*
* acquire a transaction lock on the root
*
* action: directory initialization;
*/
tlck = txLock(tid, ip, (struct metapage *) & jfs_ip->bxflag,
tlckDTREE | tlckENTRY | tlckBTROOT);
dtlck = (struct dt_lock *) & tlck->lock;
/* linelock root */
ASSERT(dtlck->index == 0);
lv = & dtlck->lv[0];
lv->offset = 0;
lv->length = DTROOTMAXSLOT;
dtlck->index++;
p = &jfs_ip->i_dtroot;
p->header.flag = DXD_INDEX | BT_ROOT | BT_LEAF;
p->header.nextindex = 0;
/* init freelist */
fsi = 1;
f = &p->slot[fsi];
/* init data area of root */
for (fsi++; fsi < DTROOTMAXSLOT; f++, fsi++)
f->next = fsi;
f->next = -1;
p->header.freelist = 1;
p->header.freecnt = 8;
/* init '..' entry */
p->header.idotdot = cpu_to_le32(idotdot);
return;
}
/*
* add_missing_indices()
*
* function: Fix a dtree page in which one or more entries have an invalid index.
* fsck.jfs should really fix this, but it currently does not.
* Called from jfs_readdir when bad index is detected.
*/
static int add_missing_indices(struct inode *inode, s64 bn)
{
struct ldtentry *d;
struct dt_lock *dtlck;
int i;
uint index;
struct lv *lv;
struct metapage *mp;
dtpage_t *p;
int rc = 0;
s8 *stbl;
tid_t tid;
struct tlock *tlck;
tid = txBegin(inode->i_sb, 0);
DT_GETPAGE(inode, bn, mp, PSIZE, p, rc);
if (rc) {
printk(KERN_ERR "DT_GETPAGE failed!\n");
goto end;
}
BT_MARK_DIRTY(mp, inode);
ASSERT(p->header.flag & BT_LEAF);
tlck = txLock(tid, inode, mp, tlckDTREE | tlckENTRY);
if (BT_IS_ROOT(mp))
tlck->type |= tlckBTROOT;
dtlck = (struct dt_lock *) &tlck->lock;
stbl = DT_GETSTBL(p);
for (i = 0; i < p->header.nextindex; i++) {
if (stbl[i] < 0) {
jfs_err("jfs: add_missing_indices: Invalid stbl[%d] = %d for inode %ld, block = %lld",
i, stbl[i], (long)inode->i_ino, (long long)bn);
rc = -EIO;
DT_PUTPAGE(mp);
txAbort(tid, 0);
goto end;
}
d = (struct ldtentry *) &p->slot[stbl[i]];
index = le32_to_cpu(d->index);
if ((index < 2) || (index >= JFS_IP(inode)->next_index)) {
d->index = cpu_to_le32(add_index(tid, inode, bn, i));
if (dtlck->index >= dtlck->maxcnt)
dtlck = (struct dt_lock *) txLinelock(dtlck);
lv = &dtlck->lv[dtlck->index];
lv->offset = stbl[i];
lv->length = 1;
dtlck->index++;
}
}
DT_PUTPAGE(mp);
(void) txCommit(tid, 1, &inode, 0);
end:
txEnd(tid);
return rc;
}
/*
* Buffer to hold directory entry info while traversing a dtree page
* before being fed to the filldir function
*/
struct jfs_dirent {
loff_t position;
int ino;
u16 name_len;
char name[];
};
/*
* function to determine next variable-sized jfs_dirent in buffer
*/
static inline struct jfs_dirent *next_jfs_dirent(struct jfs_dirent *dirent)
{
return (struct jfs_dirent *)
((char *)dirent +
((sizeof (struct jfs_dirent) + dirent->name_len + 1 +
sizeof (loff_t) - 1) &
~(sizeof (loff_t) - 1)));
}
/*
* jfs_readdir()
*
* function: read directory entries sequentially
* from the specified entry offset
*
* parameter:
*
* return: offset = (pn, index) of start entry
* of next jfs_readdir()/dtRead()
*/
int jfs_readdir(struct file *file, struct dir_context *ctx)
{
struct inode *ip = file_inode(file);
struct nls_table *codepage = JFS_SBI(ip->i_sb)->nls_tab;
int rc = 0;
loff_t dtpos; /* legacy OS/2 style position */
struct dtoffset {
s16 pn;
s16 index;
s32 unused;
} *dtoffset = (struct dtoffset *) &dtpos;
s64 bn;
struct metapage *mp;
dtpage_t *p;
int index;
s8 *stbl;
struct btstack btstack;
int i, next;
struct ldtentry *d;
struct dtslot *t;
int d_namleft, len, outlen;
unsigned long dirent_buf;
char *name_ptr;
u32 dir_index;
int do_index = 0;
uint loop_count = 0;
struct jfs_dirent *jfs_dirent;
int jfs_dirents;
int overflow, fix_page, page_fixed = 0;
static int unique_pos = 2; /* If we can't fix broken index */
if (ctx->pos == DIREND)
return 0;
if (DO_INDEX(ip)) {
/*
* persistent index is stored in directory entries.
* Special cases: 0 = .
* 1 = ..
* -1 = End of directory
*/
do_index = 1;
dir_index = (u32) ctx->pos;
/*
* NFSv4 reserves cookies 1 and 2 for . and .. so the value
* we return to the vfs is one greater than the one we use
* internally.
*/
if (dir_index)
dir_index--;
if (dir_index > 1) {
struct dir_table_slot dirtab_slot;
if (dtEmpty(ip) ||
(dir_index >= JFS_IP(ip)->next_index)) {
/* Stale position. Directory has shrunk */
ctx->pos = DIREND;
return 0;
}
repeat:
rc = read_index(ip, dir_index, &dirtab_slot);
if (rc) {
ctx->pos = DIREND;
return rc;
}
if (dirtab_slot.flag == DIR_INDEX_FREE) {
if (loop_count++ > JFS_IP(ip)->next_index) {
jfs_err("jfs_readdir detected infinite loop!");
ctx->pos = DIREND;
return 0;
}
dir_index = le32_to_cpu(dirtab_slot.addr2);
if (dir_index == -1) {
ctx->pos = DIREND;
return 0;
}
goto repeat;
}
bn = addressDTS(&dirtab_slot);
index = dirtab_slot.slot;
DT_GETPAGE(ip, bn, mp, PSIZE, p, rc);
if (rc) {
ctx->pos = DIREND;
return 0;
}
if (p->header.flag & BT_INTERNAL) {
jfs_err("jfs_readdir: bad index table");
DT_PUTPAGE(mp);
ctx->pos = DIREND;
return 0;
}
} else {
if (dir_index == 0) {
/*
* self "."
*/
ctx->pos = 1;
if (!dir_emit(ctx, ".", 1, ip->i_ino, DT_DIR))
return 0;
}
/*
* parent ".."
*/
ctx->pos = 2;
if (!dir_emit(ctx, "..", 2, PARENT(ip), DT_DIR))
return 0;
/*
* Find first entry of left-most leaf
*/
if (dtEmpty(ip)) {
ctx->pos = DIREND;
return 0;
}
if ((rc = dtReadFirst(ip, &btstack)))
return rc;
DT_GETSEARCH(ip, btstack.top, bn, mp, p, index);
}
} else {
/*
* Legacy filesystem - OS/2 & Linux JFS < 0.3.6
*
* pn = 0; index = 1: First entry "."
* pn = 0; index = 2: Second entry ".."
* pn > 0: Real entries, pn=1 -> leftmost page
* pn = index = -1: No more entries
*/
dtpos = ctx->pos;
if (dtpos < 2) {
/* build "." entry */
ctx->pos = 1;
if (!dir_emit(ctx, ".", 1, ip->i_ino, DT_DIR))
return 0;
dtoffset->index = 2;
ctx->pos = dtpos;
}
if (dtoffset->pn == 0) {
if (dtoffset->index == 2) {
/* build ".." entry */
if (!dir_emit(ctx, "..", 2, PARENT(ip), DT_DIR))
return 0;
} else {
jfs_err("jfs_readdir called with invalid offset!");
}
dtoffset->pn = 1;
dtoffset->index = 0;
ctx->pos = dtpos;
}
if (dtEmpty(ip)) {
ctx->pos = DIREND;
return 0;
}
if ((rc = dtReadNext(ip, &ctx->pos, &btstack))) {
jfs_err("jfs_readdir: unexpected rc = %d from dtReadNext",
rc);
ctx->pos = DIREND;
return 0;
}
/* get start leaf page and index */
DT_GETSEARCH(ip, btstack.top, bn, mp, p, index);
/* offset beyond directory eof ? */
if (bn < 0) {
ctx->pos = DIREND;
return 0;
}
}
dirent_buf = __get_free_page(GFP_KERNEL);
if (dirent_buf == 0) {
DT_PUTPAGE(mp);
jfs_warn("jfs_readdir: __get_free_page failed!");
ctx->pos = DIREND;
return -ENOMEM;
}
while (1) {
jfs_dirent = (struct jfs_dirent *) dirent_buf;
jfs_dirents = 0;
overflow = fix_page = 0;
stbl = DT_GETSTBL(p);
for (i = index; i < p->header.nextindex; i++) {
if (stbl[i] < 0) {
jfs_err("JFS: Invalid stbl[%d] = %d for inode %ld, block = %lld",
i, stbl[i], (long)ip->i_ino, (long long)bn);
free_page(dirent_buf);
DT_PUTPAGE(mp);
return -EIO;
}
d = (struct ldtentry *) & p->slot[stbl[i]];
if (((long) jfs_dirent + d->namlen + 1) >
(dirent_buf + PAGE_SIZE)) {
/* DBCS codepages could overrun dirent_buf */
index = i;
overflow = 1;
break;
}
d_namleft = d->namlen;
name_ptr = jfs_dirent->name;
jfs_dirent->ino = le32_to_cpu(d->inumber);
if (do_index) {
len = min(d_namleft, DTLHDRDATALEN);
jfs_dirent->position = le32_to_cpu(d->index);
/*
* d->index should always be valid, but it
* isn't. fsck.jfs doesn't create the
* directory index for the lost+found
* directory. Rather than let it go,
* we can try to fix it.
*/
if ((jfs_dirent->position < 2) ||
(jfs_dirent->position >=
JFS_IP(ip)->next_index)) {
if (!page_fixed && !isReadOnly(ip)) {
fix_page = 1;
/*
* setting overflow and setting
* index to i will cause the
* same page to be processed
* again starting here
*/
overflow = 1;
index = i;
break;
}
jfs_dirent->position = unique_pos++;
}
/*
* We add 1 to the index because we may
* use a value of 2 internally, and NFSv4
* doesn't like that.
*/
jfs_dirent->position++;
} else {
jfs_dirent->position = dtpos;
len = min(d_namleft, DTLHDRDATALEN_LEGACY);
}
/* copy the name of head/only segment */
outlen = jfs_strfromUCS_le(name_ptr, d->name, len,
codepage);
jfs_dirent->name_len = outlen;
/* copy name in the additional segment(s) */
next = d->next;
while (next >= 0) {
t = (struct dtslot *) & p->slot[next];
name_ptr += outlen;
d_namleft -= len;
/* Sanity Check */
if (d_namleft == 0) {
jfs_error(ip->i_sb,
"JFS:Dtree error: ino = %ld, bn=%lld, index = %d\n",
(long)ip->i_ino,
(long long)bn,
i);
goto skip_one;
}
len = min(d_namleft, DTSLOTDATALEN);
outlen = jfs_strfromUCS_le(name_ptr, t->name,
len, codepage);
jfs_dirent->name_len += outlen;
next = t->next;
}
jfs_dirents++;
jfs_dirent = next_jfs_dirent(jfs_dirent);
skip_one:
if (!do_index)
dtoffset->index++;
}
if (!overflow) {
/* Point to next leaf page */
if (p->header.flag & BT_ROOT)
bn = 0;
else {
bn = le64_to_cpu(p->header.next);
index = 0;
/* update offset (pn:index) for new page */
if (!do_index) {
dtoffset->pn++;
dtoffset->index = 0;
}
}
page_fixed = 0;
}
/* unpin previous leaf page */
DT_PUTPAGE(mp);
jfs_dirent = (struct jfs_dirent *) dirent_buf;
while (jfs_dirents--) {
ctx->pos = jfs_dirent->position;
if (!dir_emit(ctx, jfs_dirent->name,
jfs_dirent->name_len,
jfs_dirent->ino, DT_UNKNOWN))
goto out;
jfs_dirent = next_jfs_dirent(jfs_dirent);
}
if (fix_page) {
if ((rc = add_missing_indices(ip, bn)))
goto out;
page_fixed = 1;
}
if (!overflow && (bn == 0)) {
ctx->pos = DIREND;
break;
}
DT_GETPAGE(ip, bn, mp, PSIZE, p, rc);
if (rc) {
free_page(dirent_buf);
return rc;
}
}
out:
free_page(dirent_buf);
return rc;
}
/*
* dtReadFirst()
*
* function: get the leftmost page of the directory
*/
static int dtReadFirst(struct inode *ip, struct btstack * btstack)
{
int rc = 0;
s64 bn;
int psize = 288; /* initial in-line directory */
struct metapage *mp;
dtpage_t *p;
s8 *stbl;
struct btframe *btsp;
pxd_t *xd;
BT_CLR(btstack); /* reset stack */
/*
* descend leftmost path of the tree
*
* by convention, root bn = 0.
*/
for (bn = 0;;) {
DT_GETPAGE(ip, bn, mp, psize, p, rc);
if (rc)
return rc;
/*
* leftmost leaf page
*/
if (p->header.flag & BT_LEAF) {
/* return leftmost entry */
btsp = btstack->top;
btsp->bn = bn;
btsp->index = 0;
btsp->mp = mp;
return 0;
}
/*
* descend down to leftmost child page
*/
if (BT_STACK_FULL(btstack)) {
DT_PUTPAGE(mp);
jfs_error(ip->i_sb, "btstack overrun\n");
BT_STACK_DUMP(btstack);
return -EIO;
}
/* push (bn, index) of the parent page/entry */
BT_PUSH(btstack, bn, 0);
/* get the leftmost entry */
stbl = DT_GETSTBL(p);
if (stbl[0] < 0) {
DT_PUTPAGE(mp);
jfs_error(ip->i_sb, "stbl[0] out of bound\n");
return -EIO;
}
xd = (pxd_t *) & p->slot[stbl[0]];
/* get the child page block address */
bn = addressPXD(xd);
psize = lengthPXD(xd) << JFS_SBI(ip->i_sb)->l2bsize;
/* unpin the parent page */
DT_PUTPAGE(mp);
}
}
/*
* dtReadNext()
*
* function: get the page of the specified offset (pn:index)
*
* return: if (offset > eof), bn = -1;
*
* note: if index > nextindex of the target leaf page,
* start with 1st entry of next leaf page;
*/
static int dtReadNext(struct inode *ip, loff_t * offset,
struct btstack * btstack)
{
int rc = 0;
struct dtoffset {
s16 pn;
s16 index;
s32 unused;
} *dtoffset = (struct dtoffset *) offset;
s64 bn;
struct metapage *mp;
dtpage_t *p;
int index;
int pn;
s8 *stbl;
struct btframe *btsp, *parent;
pxd_t *xd;
/*
* get leftmost leaf page pinned
*/
if ((rc = dtReadFirst(ip, btstack)))
return rc;
/* get leaf page */
DT_GETSEARCH(ip, btstack->top, bn, mp, p, index);
/* get the start offset (pn:index) */
pn = dtoffset->pn - 1; /* Now pn = 0 represents leftmost leaf */
index = dtoffset->index;
/* start at leftmost page ? */
if (pn == 0) {
/* offset beyond eof ? */
if (index < p->header.nextindex)
goto out;
if (p->header.flag & BT_ROOT) {
bn = -1;
goto out;
}
/* start with 1st entry of next leaf page */
dtoffset->pn++;
dtoffset->index = index = 0;
goto a;
}
/* start at non-leftmost page: scan parent pages for large pn */
if (p->header.flag & BT_ROOT) {
bn = -1;
goto out;
}
/* start after next leaf page ? */
if (pn > 1)
goto b;
/* get leaf page pn = 1 */
a:
bn = le64_to_cpu(p->header.next);
/* unpin leaf page */
DT_PUTPAGE(mp);
/* offset beyond eof ? */
if (bn == 0) {
bn = -1;
goto out;
}
goto c;
/*
* scan last internal page level to get target leaf page
*/
b:
/* unpin leftmost leaf page */
DT_PUTPAGE(mp);
/* get left most parent page */
btsp = btstack->top;
parent = btsp - 1;
bn = parent->bn;
DT_GETPAGE(ip, bn, mp, PSIZE, p, rc);
if (rc)
return rc;
/* scan parent pages at last internal page level */
while (pn >= p->header.nextindex) {
pn -= p->header.nextindex;
/* get next parent page address */
bn = le64_to_cpu(p->header.next);
/* unpin current parent page */
DT_PUTPAGE(mp);
/* offset beyond eof ? */
if (bn == 0) {
bn = -1;
goto out;
}
/* get next parent page */
DT_GETPAGE(ip, bn, mp, PSIZE, p, rc);
if (rc)
return rc;
/* update parent page stack frame */
parent->bn = bn;
}
/* get leaf page address */
stbl = DT_GETSTBL(p);
xd = (pxd_t *) & p->slot[stbl[pn]];
bn = addressPXD(xd);
/* unpin parent page */
DT_PUTPAGE(mp);
/*
* get target leaf page
*/
c:
DT_GETPAGE(ip, bn, mp, PSIZE, p, rc);
if (rc)
return rc;
/*
* leaf page has been completed:
* start with 1st entry of next leaf page
*/
if (index >= p->header.nextindex) {
bn = le64_to_cpu(p->header.next);
/* unpin leaf page */
DT_PUTPAGE(mp);
/* offset beyond eof ? */
if (bn == 0) {
bn = -1;
goto out;
}
/* get next leaf page */
DT_GETPAGE(ip, bn, mp, PSIZE, p, rc);
if (rc)
return rc;
/* start with 1st entry of next leaf page */
dtoffset->pn++;
dtoffset->index = 0;
}
out:
/* return target leaf page pinned */
btsp = btstack->top;
btsp->bn = bn;
btsp->index = dtoffset->index;
btsp->mp = mp;
return 0;
}
/*
* dtCompare()
*
* function: compare search key with an internal entry
*
* return:
* < 0 if k is < record
* = 0 if k is = record
* > 0 if k is > record
*/
static int dtCompare(struct component_name * key, /* search key */
dtpage_t * p, /* directory page */
int si)
{ /* entry slot index */
wchar_t *kname;
__le16 *name;
int klen, namlen, len, rc;
struct idtentry *ih;
struct dtslot *t;
/*
* force the left-most key on internal pages, at any level of
* the tree, to be less than any search key.
* this obviates having to update the leftmost key on an internal
* page when the user inserts a new key in the tree smaller than
* anything that has been stored.
*
* (? if/when dtSearch() narrows down to 1st entry (index = 0),
* at any internal page at any level of the tree,
* it descends to child of the entry anyway -
* ? make the entry as min size dummy entry)
*
* if (e->index == 0 && h->prevpg == P_INVALID && !(h->flags & BT_LEAF))
* return (1);
*/
kname = key->name;
klen = key->namlen;
ih = (struct idtentry *) & p->slot[si];
si = ih->next;
name = ih->name;
namlen = ih->namlen;
len = min(namlen, DTIHDRDATALEN);
/* compare with head/only segment */
len = min(klen, len);
if ((rc = UniStrncmp_le(kname, name, len)))
return rc;
klen -= len;
namlen -= len;
/* compare with additional segment(s) */
kname += len;
while (klen > 0 && namlen > 0) {
/* compare with next name segment */
t = (struct dtslot *) & p->slot[si];
len = min(namlen, DTSLOTDATALEN);
len = min(klen, len);
name = t->name;
if ((rc = UniStrncmp_le(kname, name, len)))
return rc;
klen -= len;
namlen -= len;
kname += len;
si = t->next;
}
return (klen - namlen);
}
/*
* ciCompare()
*
* function: compare search key with an (leaf/internal) entry
*
* return:
* < 0 if k is < record
* = 0 if k is = record
* > 0 if k is > record
*/
static int ciCompare(struct component_name * key, /* search key */
dtpage_t * p, /* directory page */
int si, /* entry slot index */
int flag)
{
wchar_t *kname, x;
__le16 *name;
int klen, namlen, len, rc;
struct ldtentry *lh;
struct idtentry *ih;
struct dtslot *t;
int i;
/*
* force the left-most key on internal pages, at any level of
* the tree, to be less than any search key.
* this obviates having to update the leftmost key on an internal
* page when the user inserts a new key in the tree smaller than
* anything that has been stored.
*
* (? if/when dtSearch() narrows down to 1st entry (index = 0),
* at any internal page at any level of the tree,
* it descends to child of the entry anyway -
* ? make the entry as min size dummy entry)
*
* if (e->index == 0 && h->prevpg == P_INVALID && !(h->flags & BT_LEAF))
* return (1);
*/
kname = key->name;
klen = key->namlen;
/*
* leaf page entry
*/
if (p->header.flag & BT_LEAF) {
lh = (struct ldtentry *) & p->slot[si];
si = lh->next;
name = lh->name;
namlen = lh->namlen;
if (flag & JFS_DIR_INDEX)
len = min(namlen, DTLHDRDATALEN);
else
len = min(namlen, DTLHDRDATALEN_LEGACY);
}
/*
* internal page entry
*/
else {
ih = (struct idtentry *) & p->slot[si];
si = ih->next;
name = ih->name;
namlen = ih->namlen;
len = min(namlen, DTIHDRDATALEN);
}
/* compare with head/only segment */
len = min(klen, len);
for (i = 0; i < len; i++, kname++, name++) {
/* only uppercase if case-insensitive support is on */
if ((flag & JFS_OS2) == JFS_OS2)
x = UniToupper(le16_to_cpu(*name));
else
x = le16_to_cpu(*name);
if ((rc = *kname - x))
return rc;
}
klen -= len;
namlen -= len;
/* compare with additional segment(s) */
while (klen > 0 && namlen > 0) {
/* compare with next name segment */
t = (struct dtslot *) & p->slot[si];
len = min(namlen, DTSLOTDATALEN);
len = min(klen, len);
name = t->name;
for (i = 0; i < len; i++, kname++, name++) {
/* only uppercase if case-insensitive support is on */
if ((flag & JFS_OS2) == JFS_OS2)
x = UniToupper(le16_to_cpu(*name));
else
x = le16_to_cpu(*name);
if ((rc = *kname - x))
return rc;
}
klen -= len;
namlen -= len;
si = t->next;
}
return (klen - namlen);
}
/*
* ciGetLeafPrefixKey()
*
* function: compute prefix of suffix compression
* from two adjacent leaf entries
* across page boundary
*
* return: non-zero on error
*
*/
static int ciGetLeafPrefixKey(dtpage_t * lp, int li, dtpage_t * rp,
int ri, struct component_name * key, int flag)
{
int klen, namlen;
wchar_t *pl, *pr, *kname;
struct component_name lkey;
struct component_name rkey;
lkey.name = kmalloc_array(JFS_NAME_MAX + 1, sizeof(wchar_t),
GFP_KERNEL);
if (lkey.name == NULL)
return -ENOMEM;
rkey.name = kmalloc_array(JFS_NAME_MAX + 1, sizeof(wchar_t),
GFP_KERNEL);
if (rkey.name == NULL) {
kfree(lkey.name);
return -ENOMEM;
}
/* get left and right key */
dtGetKey(lp, li, &lkey, flag);
lkey.name[lkey.namlen] = 0;
if ((flag & JFS_OS2) == JFS_OS2)
ciToUpper(&lkey);
dtGetKey(rp, ri, &rkey, flag);
rkey.name[rkey.namlen] = 0;
if ((flag & JFS_OS2) == JFS_OS2)
ciToUpper(&rkey);
/* compute prefix */
klen = 0;
kname = key->name;
namlen = min(lkey.namlen, rkey.namlen);
for (pl = lkey.name, pr = rkey.name;
namlen; pl++, pr++, namlen--, klen++, kname++) {
*kname = *pr;
if (*pl != *pr) {
key->namlen = klen + 1;
goto free_names;
}
}
/* l->namlen <= r->namlen since l <= r */
if (lkey.namlen < rkey.namlen) {
*kname = *pr;
key->namlen = klen + 1;
} else /* l->namelen == r->namelen */
key->namlen = klen;
free_names:
kfree(lkey.name);
kfree(rkey.name);
return 0;
}
/*
* dtGetKey()
*
* function: get key of the entry
*/
static void dtGetKey(dtpage_t * p, int i, /* entry index */
struct component_name * key, int flag)
{
int si;
s8 *stbl;
struct ldtentry *lh;
struct idtentry *ih;
struct dtslot *t;
int namlen, len;
wchar_t *kname;
__le16 *name;
/* get entry */
stbl = DT_GETSTBL(p);
si = stbl[i];
if (p->header.flag & BT_LEAF) {
lh = (struct ldtentry *) & p->slot[si];
si = lh->next;
namlen = lh->namlen;
name = lh->name;
if (flag & JFS_DIR_INDEX)
len = min(namlen, DTLHDRDATALEN);
else
len = min(namlen, DTLHDRDATALEN_LEGACY);
} else {
ih = (struct idtentry *) & p->slot[si];
si = ih->next;
namlen = ih->namlen;
name = ih->name;
len = min(namlen, DTIHDRDATALEN);
}
key->namlen = namlen;
kname = key->name;
/*
* move head/only segment
*/
UniStrncpy_from_le(kname, name, len);
/*
* move additional segment(s)
*/
while (si >= 0) {
/* get next segment */
t = &p->slot[si];
kname += len;
namlen -= len;
len = min(namlen, DTSLOTDATALEN);
UniStrncpy_from_le(kname, t->name, len);
si = t->next;
}
}
/*
* dtInsertEntry()
*
* function: allocate free slot(s) and
* write a leaf/internal entry
*
* return: entry slot index
*/
static void dtInsertEntry(dtpage_t * p, int index, struct component_name * key,
ddata_t * data, struct dt_lock ** dtlock)
{
struct dtslot *h, *t;
struct ldtentry *lh = NULL;
struct idtentry *ih = NULL;
int hsi, fsi, klen, len, nextindex;
wchar_t *kname;
__le16 *name;
s8 *stbl;
pxd_t *xd;
struct dt_lock *dtlck = *dtlock;
struct lv *lv;
int xsi, n;
s64 bn = 0;
struct metapage *mp = NULL;
klen = key->namlen;
kname = key->name;
/* allocate a free slot */
hsi = fsi = p->header.freelist;
h = &p->slot[fsi];
p->header.freelist = h->next;
--p->header.freecnt;
/* open new linelock */
if (dtlck->index >= dtlck->maxcnt)
dtlck = (struct dt_lock *) txLinelock(dtlck);
lv = & dtlck->lv[dtlck->index];
lv->offset = hsi;
/* write head/only segment */
if (p->header.flag & BT_LEAF) {
lh = (struct ldtentry *) h;
lh->next = h->next;
lh->inumber = cpu_to_le32(data->leaf.ino);
lh->namlen = klen;
name = lh->name;
if (data->leaf.ip) {
len = min(klen, DTLHDRDATALEN);
if (!(p->header.flag & BT_ROOT))
bn = addressPXD(&p->header.self);
lh->index = cpu_to_le32(add_index(data->leaf.tid,
data->leaf.ip,
bn, index));
} else
len = min(klen, DTLHDRDATALEN_LEGACY);
} else {
ih = (struct idtentry *) h;
ih->next = h->next;
xd = (pxd_t *) ih;
*xd = data->xd;
ih->namlen = klen;
name = ih->name;
len = min(klen, DTIHDRDATALEN);
}
UniStrncpy_to_le(name, kname, len);
n = 1;
xsi = hsi;
/* write additional segment(s) */
t = h;
klen -= len;
while (klen) {
/* get free slot */
fsi = p->header.freelist;
t = &p->slot[fsi];
p->header.freelist = t->next;
--p->header.freecnt;
/* is next slot contiguous ? */
if (fsi != xsi + 1) {
/* close current linelock */
lv->length = n;
dtlck->index++;
/* open new linelock */
if (dtlck->index < dtlck->maxcnt)
lv++;
else {
dtlck = (struct dt_lock *) txLinelock(dtlck);
lv = & dtlck->lv[0];
}
lv->offset = fsi;
n = 0;
}
kname += len;
len = min(klen, DTSLOTDATALEN);
UniStrncpy_to_le(t->name, kname, len);
n++;
xsi = fsi;
klen -= len;
}
/* close current linelock */
lv->length = n;
dtlck->index++;
*dtlock = dtlck;
/* terminate last/only segment */
if (h == t) {
/* single segment entry */
if (p->header.flag & BT_LEAF)
lh->next = -1;
else
ih->next = -1;
} else
/* multi-segment entry */
t->next = -1;
/* if insert into middle, shift right succeeding entries in stbl */
stbl = DT_GETSTBL(p);
nextindex = p->header.nextindex;
if (index < nextindex) {
memmove(stbl + index + 1, stbl + index, nextindex - index);
if ((p->header.flag & BT_LEAF) && data->leaf.ip) {
s64 lblock;
/*
* Need to update slot number for entries that moved
* in the stbl
*/
mp = NULL;
for (n = index + 1; n <= nextindex; n++) {
lh = (struct ldtentry *) & (p->slot[stbl[n]]);
modify_index(data->leaf.tid, data->leaf.ip,
le32_to_cpu(lh->index), bn, n,
&mp, &lblock);
}
if (mp)
release_metapage(mp);
}
}
stbl[index] = hsi;
/* advance next available entry index of stbl */
++p->header.nextindex;
}
/*
* dtMoveEntry()
*
* function: move entries from split/left page to new/right page
*
* nextindex of dst page and freelist/freecnt of both pages
* are updated.
*/
static void dtMoveEntry(dtpage_t * sp, int si, dtpage_t * dp,
struct dt_lock ** sdtlock, struct dt_lock ** ddtlock,
int do_index)
{
int ssi, next; /* src slot index */
int di; /* dst entry index */
int dsi; /* dst slot index */
s8 *sstbl, *dstbl; /* sorted entry table */
int snamlen, len;
struct ldtentry *slh, *dlh = NULL;
struct idtentry *sih, *dih = NULL;
struct dtslot *h, *s, *d;
struct dt_lock *sdtlck = *sdtlock, *ddtlck = *ddtlock;
struct lv *slv, *dlv;
int xssi, ns, nd;
int sfsi;
sstbl = (s8 *) & sp->slot[sp->header.stblindex];
dstbl = (s8 *) & dp->slot[dp->header.stblindex];
dsi = dp->header.freelist; /* first (whole page) free slot */
sfsi = sp->header.freelist;
/* linelock destination entry slot */
dlv = & ddtlck->lv[ddtlck->index];
dlv->offset = dsi;
/* linelock source entry slot */
slv = & sdtlck->lv[sdtlck->index];
slv->offset = sstbl[si];
xssi = slv->offset - 1;
/*
* move entries
*/
ns = nd = 0;
for (di = 0; si < sp->header.nextindex; si++, di++) {
ssi = sstbl[si];
dstbl[di] = dsi;
/* is next slot contiguous ? */
if (ssi != xssi + 1) {
/* close current linelock */
slv->length = ns;
sdtlck->index++;
/* open new linelock */
if (sdtlck->index < sdtlck->maxcnt)
slv++;
else {
sdtlck = (struct dt_lock *) txLinelock(sdtlck);
slv = & sdtlck->lv[0];
}
slv->offset = ssi;
ns = 0;
}
/*
* move head/only segment of an entry
*/
/* get dst slot */
h = d = &dp->slot[dsi];
/* get src slot and move */
s = &sp->slot[ssi];
if (sp->header.flag & BT_LEAF) {
/* get source entry */
slh = (struct ldtentry *) s;
dlh = (struct ldtentry *) h;
snamlen = slh->namlen;
if (do_index) {
len = min(snamlen, DTLHDRDATALEN);
dlh->index = slh->index; /* little-endian */
} else
len = min(snamlen, DTLHDRDATALEN_LEGACY);
memcpy(dlh, slh, 6 + len * 2);
next = slh->next;
/* update dst head/only segment next field */
dsi++;
dlh->next = dsi;
} else {
sih = (struct idtentry *) s;
snamlen = sih->namlen;
len = min(snamlen, DTIHDRDATALEN);
dih = (struct idtentry *) h;
memcpy(dih, sih, 10 + len * 2);
next = sih->next;
dsi++;
dih->next = dsi;
}
/* free src head/only segment */
s->next = sfsi;
s->cnt = 1;
sfsi = ssi;
ns++;
nd++;
xssi = ssi;
/*
* move additional segment(s) of the entry
*/
snamlen -= len;
while ((ssi = next) >= 0) {
/* is next slot contiguous ? */
if (ssi != xssi + 1) {
/* close current linelock */
slv->length = ns;
sdtlck->index++;
/* open new linelock */
if (sdtlck->index < sdtlck->maxcnt)
slv++;
else {
sdtlck =
(struct dt_lock *)
txLinelock(sdtlck);
slv = & sdtlck->lv[0];
}
slv->offset = ssi;
ns = 0;
}
/* get next source segment */
s = &sp->slot[ssi];
/* get next destination free slot */
d++;
len = min(snamlen, DTSLOTDATALEN);
UniStrncpy_le(d->name, s->name, len);
ns++;
nd++;
xssi = ssi;
dsi++;
d->next = dsi;
/* free source segment */
next = s->next;
s->next = sfsi;
s->cnt = 1;
sfsi = ssi;
snamlen -= len;
} /* end while */
/* terminate dst last/only segment */
if (h == d) {
/* single segment entry */
if (dp->header.flag & BT_LEAF)
dlh->next = -1;
else
dih->next = -1;
} else
/* multi-segment entry */
d->next = -1;
} /* end for */
/* close current linelock */
slv->length = ns;
sdtlck->index++;
*sdtlock = sdtlck;
dlv->length = nd;
ddtlck->index++;
*ddtlock = ddtlck;
/* update source header */
sp->header.freelist = sfsi;
sp->header.freecnt += nd;
/* update destination header */
dp->header.nextindex = di;
dp->header.freelist = dsi;
dp->header.freecnt -= nd;
}
/*
* dtDeleteEntry()
*
* function: free a (leaf/internal) entry
*
* log freelist header, stbl, and each segment slot of entry
* (even though last/only segment next field is modified,
* physical image logging requires all segment slots of
* the entry logged to avoid applying previous updates
* to the same slots)
*/
static void dtDeleteEntry(dtpage_t * p, int fi, struct dt_lock ** dtlock)
{
int fsi; /* free entry slot index */
s8 *stbl;
struct dtslot *t;
int si, freecnt;
struct dt_lock *dtlck = *dtlock;
struct lv *lv;
int xsi, n;
/* get free entry slot index */
stbl = DT_GETSTBL(p);
fsi = stbl[fi];
/* open new linelock */
if (dtlck->index >= dtlck->maxcnt)
dtlck = (struct dt_lock *) txLinelock(dtlck);
lv = & dtlck->lv[dtlck->index];
lv->offset = fsi;
/* get the head/only segment */
t = &p->slot[fsi];
if (p->header.flag & BT_LEAF)
si = ((struct ldtentry *) t)->next;
else
si = ((struct idtentry *) t)->next;
t->next = si;
t->cnt = 1;
n = freecnt = 1;
xsi = fsi;
/* find the last/only segment */
while (si >= 0) {
/* is next slot contiguous ? */
if (si != xsi + 1) {
/* close current linelock */
lv->length = n;
dtlck->index++;
/* open new linelock */
if (dtlck->index < dtlck->maxcnt)
lv++;
else {
dtlck = (struct dt_lock *) txLinelock(dtlck);
lv = & dtlck->lv[0];
}
lv->offset = si;
n = 0;
}
n++;
xsi = si;
freecnt++;
t = &p->slot[si];
t->cnt = 1;
si = t->next;
}
/* close current linelock */
lv->length = n;
dtlck->index++;
*dtlock = dtlck;
/* update freelist */
t->next = p->header.freelist;
p->header.freelist = fsi;
p->header.freecnt += freecnt;
/* if delete from middle,
 * shift left the succeeding entries in the stbl
*/
si = p->header.nextindex;
if (fi < si - 1)
memmove(&stbl[fi], &stbl[fi + 1], si - fi - 1);
p->header.nextindex--;
}
/*
* dtTruncateEntry()
*
* function: truncate a (leaf/internal) entry
*
* log freelist header, stbl, and each segment slot of entry
* (even though last/only segment next field is modified,
* physical image logging requires all segment slots of
* the entry logged to avoid applying previous updates
* to the same slots)
*/
static void dtTruncateEntry(dtpage_t * p, int ti, struct dt_lock ** dtlock)
{
int tsi; /* truncate entry slot index */
s8 *stbl;
struct dtslot *t;
int si, freecnt;
struct dt_lock *dtlck = *dtlock;
struct lv *lv;
int fsi, xsi, n;
/* get free entry slot index */
stbl = DT_GETSTBL(p);
tsi = stbl[ti];
/* open new linelock */
if (dtlck->index >= dtlck->maxcnt)
dtlck = (struct dt_lock *) txLinelock(dtlck);
lv = & dtlck->lv[dtlck->index];
lv->offset = tsi;
/* get the head/only segment */
t = &p->slot[tsi];
ASSERT(p->header.flag & BT_INTERNAL);
((struct idtentry *) t)->namlen = 0;
si = ((struct idtentry *) t)->next;
((struct idtentry *) t)->next = -1;
n = 1;
freecnt = 0;
fsi = si;
xsi = tsi;
/* find the last/only segment */
while (si >= 0) {
/* is next slot contiguous ? */
if (si != xsi + 1) {
/* close current linelock */
lv->length = n;
dtlck->index++;
/* open new linelock */
if (dtlck->index < dtlck->maxcnt)
lv++;
else {
dtlck = (struct dt_lock *) txLinelock(dtlck);
lv = & dtlck->lv[0];
}
lv->offset = si;
n = 0;
}
n++;
xsi = si;
freecnt++;
t = &p->slot[si];
t->cnt = 1;
si = t->next;
}
/* close current linelock */
lv->length = n;
dtlck->index++;
*dtlock = dtlck;
/* update freelist */
if (freecnt == 0)
return;
t->next = p->header.freelist;
p->header.freelist = fsi;
p->header.freecnt += freecnt;
}
/*
* dtLinelockFreelist()
*/
static void dtLinelockFreelist(dtpage_t * p, /* directory page */
int m, /* max slot index */
struct dt_lock ** dtlock)
{
int fsi; /* free entry slot index */
struct dtslot *t;
int si;
struct dt_lock *dtlck = *dtlock;
struct lv *lv;
int xsi, n;
/* get free entry slot index */
fsi = p->header.freelist;
/* open new linelock */
if (dtlck->index >= dtlck->maxcnt)
dtlck = (struct dt_lock *) txLinelock(dtlck);
lv = & dtlck->lv[dtlck->index];
lv->offset = fsi;
n = 1;
xsi = fsi;
t = &p->slot[fsi];
si = t->next;
/* find the last/only segment */
while (si < m && si >= 0) {
/* is next slot contiguous ? */
if (si != xsi + 1) {
/* close current linelock */
lv->length = n;
dtlck->index++;
/* open new linelock */
if (dtlck->index < dtlck->maxcnt)
lv++;
else {
dtlck = (struct dt_lock *) txLinelock(dtlck);
lv = & dtlck->lv[0];
}
lv->offset = si;
n = 0;
}
n++;
xsi = si;
t = &p->slot[si];
si = t->next;
}
/* close current linelock */
lv->length = n;
dtlck->index++;
*dtlock = dtlck;
}
/*
* NAME: dtModify
*
* FUNCTION: Modify the inode number part of a directory entry
*
* PARAMETERS:
* tid - Transaction id
* ip - Inode of parent directory
* key - Name of entry to be modified
* orig_ino - Original inode number expected in entry
* new_ino - New inode number to put into entry
* flag - JFS_RENAME
*
* RETURNS:
* -ESTALE - If entry found does not match orig_ino passed in
* -ENOENT - If no entry can be found to match key
* 0 - If successfully modified entry
*/
int dtModify(tid_t tid, struct inode *ip,
struct component_name * key, ino_t * orig_ino, ino_t new_ino, int flag)
{
int rc;
s64 bn;
struct metapage *mp;
dtpage_t *p;
int index;
struct btstack btstack;
struct tlock *tlck;
struct dt_lock *dtlck;
struct lv *lv;
s8 *stbl;
int entry_si; /* entry slot index */
struct ldtentry *entry;
/*
* search for the entry to modify:
*
* dtSearch() returns (leaf page pinned, index at which to modify).
*/
if ((rc = dtSearch(ip, key, orig_ino, &btstack, flag)))
return rc;
/* retrieve search result */
DT_GETSEARCH(ip, btstack.top, bn, mp, p, index);
BT_MARK_DIRTY(mp, ip);
/*
* acquire a transaction lock on the leaf page of named entry
*/
tlck = txLock(tid, ip, mp, tlckDTREE | tlckENTRY);
dtlck = (struct dt_lock *) & tlck->lock;
/* get slot index of the entry */
stbl = DT_GETSTBL(p);
entry_si = stbl[index];
/* linelock entry */
ASSERT(dtlck->index == 0);
lv = & dtlck->lv[0];
lv->offset = entry_si;
lv->length = 1;
dtlck->index++;
/* get the head/only segment */
entry = (struct ldtentry *) & p->slot[entry_si];
/* substitute the inode number of the entry */
entry->inumber = cpu_to_le32(new_ino);
/* unpin the leaf page */
DT_PUTPAGE(mp);
return 0;
}
] |
| 107/2 | 2026/03/16 15:51 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 94176 output: 44 thoughts: 111 Thoughts: Okay, here's my interpretation of this code snippet, filtered through my expert lens: **Deconstructing the Page Retrieval Macro** Alright, let's break this down. Seeing `DT_GETPAGE` immediately flags a Data Tree context, likely within a filesystem, given the `jfs` connection. The macro definition, specifically the `#define`, is what interests me here. It's a key abstraction. `DT_GETPAGE` is clearly a wrapper. It's calling `BT_GETPAGE`, and that's the real workhorse. I'm familiar with this pattern; it isolates the specific filesystem-level details from the higher-level data tree logic. We get the block address (`BN`), the memory pointer (`MP`), the size (`SIZE`), and the buffer pointer (`P`). The return code, (`RC`), is also important. The `dtpage_t` type is a specialized page structure that should be well-defined in the relevant code. The inclusion of `i_dtroot` at the end of the `BT_GETPAGE` call strongly suggests a pointer to the Data Tree's root node, meaning this is tied to the B-Tree structure. |
| 108/2 | 2026/03/16 15:53 | tool | codesearch-definition-source |
Error:requested entity does not exist Args: map[ContextFile:fs/jfs/jfs_btree.h IncludeLines:true Name:BT_GETPAGE] Results: map[SourceCode: SourceFile:] |
| 109/2 | 2026/03/16 15:53 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 94601 output: 18 thoughts: 19 Thoughts: Here's my thought process, summarized as you requested: **Navigating the JFS Source Code** Okay, so I need to find the implementation of something related to getting a page in the JFS filesystem. The obvious approach is to start with the source code and use a targeted search. I'll focus my search on the `fs/jfs/` directory, which is where the JFS filesystem code is likely located. My key search term is `BT_GETPAGE`. I expect this to lead me to the relevant function or functions responsible for handling page retrieval within the B-tree structure or whatever data structures JFS uses internally for data organization. Let's see what the search turns up. This should be a quick way to pinpoint the starting point for a deeper dive into the JFS page management routines. |
| 110/2 | 2026/03/16 15:53 | tool | grepper |
Args:map[Expression:BT_GETPAGE] Results: map[Output:fs/jfs/jfs_btree.h=17=struct btpage {
--
fs/jfs/jfs_btree.h-53-/* get the page buffer and the page for specified block address */
fs/jfs/jfs_btree.h:54:#define BT_GETPAGE(IP, BN, MP, TYPE, SIZE, P, RC, ROOT)\
fs/jfs/jfs_btree.h-55-{\
--
fs/jfs/jfs_dtree.c=115=do { \
fs/jfs/jfs_dtree.c:116: BT_GETPAGE(IP, BN, MP, dtpage_t, SIZE, P, RC, i_dtroot); \
fs/jfs/jfs_dtree.c-117- if (!(RC)) { \
--
fs/jfs/jfs_xtree.c=111=static inline xtpage_t *xt_getpage(struct inode *ip, s64 bn, struct metapage **mp)
--
fs/jfs/jfs_xtree.c-115-
fs/jfs/jfs_xtree.c:116: BT_GETPAGE(ip, bn, *mp, xtpage_t, PSIZE, p, rc, i_xtroot);
fs/jfs/jfs_xtree.c-117-
] |
| 111/2 | 2026/03/16 15:53 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 95157 output: 26 thoughts: 25 Thoughts: Here's a first-person summary, tailored for an expert audience, as requested: **Diving into the JFS B-Tree Structure** Alright, time to dig into the guts of the JFS B-tree implementation. I need to refresh my memory on some specific details. Let's start by zeroing in on the definitions in `fs/jfs/jfs_btree.h`. Specifically, lines 53 through 65. That section likely covers the foundational structures for node layouts, key/data representation, or perhaps even critical constants related to the B-tree's internal parameters. I'm hoping to quickly identify the crucial elements for whatever I'm working on right now – whether it's understanding node splitting/merging behavior, key comparisons, or addressing some potential performance bottleneck. Let's see what gems are contained within. |
| 112/2 | 2026/03/16 15:54 | tool | read-file |
Args:map[File:fs/jfs/jfs_btree.h] Results: map[Contents:/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Copyright (C) International Business Machines Corp., 2000-2004
*/
#ifndef _H_JFS_BTREE
#define _H_JFS_BTREE
/*
* jfs_btree.h: B+-tree
*
* JFS B+-tree (dtree and xtree) common definitions
*/
/*
* basic btree page - btpage
*
struct btpage {
s64 next; right sibling bn
s64 prev; left sibling bn
u8 flag;
u8 rsrvd[7]; type specific
s64 self; self address
u8 entry[4064];
}; */
/* btpaget_t flag */
#define BT_TYPE 0x07 /* B+-tree index */
#define BT_ROOT 0x01 /* root page */
#define BT_LEAF 0x02 /* leaf page */
#define BT_INTERNAL 0x04 /* internal page */
#define BT_RIGHTMOST 0x10 /* rightmost page */
#define BT_LEFTMOST 0x20 /* leftmost page */
#define BT_SWAPPED 0x80 /* used by fsck for endian swapping */
/* btorder (in inode) */
#define BT_RANDOM 0x0000
#define BT_SEQUENTIAL 0x0001
#define BT_LOOKUP 0x0010
#define BT_INSERT 0x0020
#define BT_DELETE 0x0040
/*
* btree page buffer cache access
*/
#define BT_IS_ROOT(MP) (((MP)->xflag & COMMIT_PAGE) == 0)
/* get page from buffer page */
#define BT_PAGE(IP, MP, TYPE, ROOT)\
(BT_IS_ROOT(MP) ? (TYPE *)&JFS_IP(IP)->ROOT : (TYPE *)(MP)->data)
/* get the page buffer and the page for specified block address */
#define BT_GETPAGE(IP, BN, MP, TYPE, SIZE, P, RC, ROOT)\
{\
if ((BN) == 0)\
{\
MP = (struct metapage *)&JFS_IP(IP)->bxflag;\
P = (TYPE *)&JFS_IP(IP)->ROOT;\
RC = 0;\
}\
else\
{\
MP = read_metapage((IP), BN, SIZE, 1);\
if (MP) {\
RC = 0;\
P = (MP)->data;\
} else {\
P = NULL;\
jfs_err("bread failed!");\
RC = -EIO;\
}\
}\
}
#define BT_MARK_DIRTY(MP, IP)\
{\
if (BT_IS_ROOT(MP))\
mark_inode_dirty(IP);\
else\
mark_metapage_dirty(MP);\
}
/* put the page buffer */
#define BT_PUTPAGE(MP)\
{\
if (! BT_IS_ROOT(MP)) \
release_metapage(MP); \
}
/*
* btree traversal stack
*
* record the path traversed during the search;
* top frame record the leaf page/entry selected.
*/
struct btframe { /* stack frame */
s64 bn; /* 8: */
s16 index; /* 2: */
s16 lastindex; /* 2: unused */
struct metapage *mp; /* 4/8: */
}; /* (16/24) */
struct btstack {
struct btframe *top;
int nsplit;
struct btframe stack[MAXTREEHEIGHT];
};
#define BT_CLR(btstack)\
(btstack)->top = (btstack)->stack
#define BT_STACK_FULL(btstack)\
( (btstack)->top == &((btstack)->stack[MAXTREEHEIGHT-1]))
#define BT_PUSH(BTSTACK, BN, INDEX)\
{\
assert(!BT_STACK_FULL(BTSTACK));\
(BTSTACK)->top->bn = BN;\
(BTSTACK)->top->index = INDEX;\
++(BTSTACK)->top;\
}
#define BT_POP(btstack)\
( (btstack)->top == (btstack)->stack ? NULL : --(btstack)->top )
#define BT_STACK(btstack)\
( (btstack)->top == (btstack)->stack ? NULL : (btstack)->top )
static inline void BT_STACK_DUMP(struct btstack *btstack)
{
int i;
printk("btstack dump:\n");
for (i = 0; i < MAXTREEHEIGHT; i++)
printk(KERN_ERR "bn = %Lx, index = %d\n",
(long long)btstack->stack[i].bn,
btstack->stack[i].index);
}
/* retrieve search results */
#define BT_GETSEARCH(IP, LEAF, BN, MP, TYPE, P, INDEX, ROOT)\
{\
BN = (LEAF)->bn;\
MP = (LEAF)->mp;\
if (BN)\
P = (TYPE *)MP->data;\
else\
P = (TYPE *)&JFS_IP(IP)->ROOT;\
INDEX = (LEAF)->index;\
}
/* put the page buffer of search */
#define BT_PUTSEARCH(BTSTACK)\
{\
if (! BT_IS_ROOT((BTSTACK)->top->mp))\
release_metapage((BTSTACK)->top->mp);\
}
#endif /* _H_JFS_BTREE */
] |
| 113/2 | 2026/03/16 15:54 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 96795 output: 47 thoughts: 52 Thoughts: Okay, here's my interpretation of that thought process, assuming I'm well-versed in this area: **Deconstructing a Metapage Read** Right, `BT_GETPAGE` – that's the one that calls `read_metapage((IP), BN, SIZE, 1)`. I immediately zero in on that last argument, the `1`. That's the *absolute* flag, isn't it? Meaning it's determining how the page is read into memory. So, an absolute read. Now I need to refresh my memory on the specifics of how that function operates. Time to look up the definition of `read_metapage` to understand the implications of that "absolute" parameter, and confirm my initial thought on how it is read. |
| 114/2 | 2026/03/16 15:55 | tool | codesearch-definition-source |
Error:requested entity does not exist Args: map[ContextFile:fs/jfs/jfs_metapage.h IncludeLines:true Name:read_metapage] Results: map[SourceCode: SourceFile:] |
| 115/2 | 2026/03/16 15:55 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 97081 output: 19 thoughts: 28 Thoughts: Okay, here's my attempt at summarizing the thought process, as if I were experiencing it: **Digging into the JFS Metapage Code** Right, let's see... I need to understand how the JFS handles its metapages. That suggests a quick search should point me in the right direction. I'm going to look for the function `read_metapage`, specifically within the `fs/jfs` directory. Given the context, it's most likely defined in a header file for metapage operations, so `jfs_metapage.h` seems like the obvious candidate. A targeted search within that should get me to the core functionality I'm interested in very quickly. Let's start there. |
| 116/2 | 2026/03/16 15:56 | tool | grepper |
Args:map[Expression:read_metapage] Results: map[Output:fs/jfs/jfs_btree.h=17=struct btpage {
--
fs/jfs/jfs_btree.h-63- {\
fs/jfs/jfs_btree.h:64: MP = read_metapage((IP), BN, SIZE, 1);\
fs/jfs/jfs_btree.h-65- if (MP) {\
--
fs/jfs/jfs_dmap.c=153=int dbMount(struct inode *ipbmap)
--
fs/jfs/jfs_dmap.c-168- /* read the on-disk bmap descriptor. */
fs/jfs/jfs_dmap.c:169: mp = read_metapage(ipbmap,
fs/jfs/jfs_dmap.c-170- BMAPBLKNO << JFS_SBI(ipbmap->i_sb)->l2nbperpage,
--
fs/jfs/jfs_dmap.c=276=int dbSync(struct inode *ipbmap)
--
fs/jfs/jfs_dmap.c-286- /* get the buffer for the on-disk bmap descriptor. */
fs/jfs/jfs_dmap.c:287: mp = read_metapage(ipbmap,
fs/jfs/jfs_dmap.c-288- BMAPBLKNO << JFS_SBI(ipbmap->i_sb)->l2nbperpage,
--
fs/jfs/jfs_dmap.c-290- if (mp == NULL) {
fs/jfs/jfs_dmap.c:291: jfs_err("dbSync: read_metapage failed!");
fs/jfs/jfs_dmap.c-292- return -EIO;
--
fs/jfs/jfs_dmap.c=344=int dbFree(struct inode *ip, s64 blkno, s64 nblocks)
--
fs/jfs/jfs_dmap.c-384- lblkno = BLKTODMAP(blkno, bmp->db_l2nbperpage);
fs/jfs/jfs_dmap.c:385: mp = read_metapage(ipbmap, lblkno, PSIZE, 0);
fs/jfs/jfs_dmap.c-386- if (mp == NULL) {
--
fs/jfs/jfs_dmap.c=438=dbUpdatePMap(struct inode *ipbmap,
--
fs/jfs/jfs_dmap.c-478-
fs/jfs/jfs_dmap.c:479: mp = read_metapage(bmp->db_ipbmap, lblkno, PSIZE,
fs/jfs/jfs_dmap.c-480- 0);
--
fs/jfs/jfs_dmap.c=714=int dbAlloc(struct inode *ip, s64 hint, s64 nblocks, s64 * results)
--
fs/jfs/jfs_dmap.c-795- lblkno = BLKTODMAP(blkno, bmp->db_l2nbperpage);
fs/jfs/jfs_dmap.c:796: mp = read_metapage(ipbmap, lblkno, PSIZE, 0);
fs/jfs/jfs_dmap.c-797- if (mp == NULL)
--
fs/jfs/jfs_dmap.c=972=static int dbExtend(struct inode *ip, s64 blkno, s64 nblocks, s64 addnblocks)
--
fs/jfs/jfs_dmap.c-1025- lblkno = BLKTODMAP(extblkno, bmp->db_l2nbperpage);
fs/jfs/jfs_dmap.c:1026: mp = read_metapage(ipbmap, lblkno, PSIZE, 0);
fs/jfs/jfs_dmap.c-1027- if (mp == NULL) {
--
fs/jfs/jfs_dmap.c=1312=dbAllocAG(struct bmap * bmp, int agno, s64 nblocks, int l2nb, s64 * results)
--
fs/jfs/jfs_dmap.c-1368- lblkno = BLKTOCTL(blkno, bmp->db_l2nbperpage, bmp->db_aglevel);
fs/jfs/jfs_dmap.c:1369: mp = read_metapage(bmp->db_ipbmap, lblkno, PSIZE, 0);
fs/jfs/jfs_dmap.c-1370- if (mp == NULL)
--
fs/jfs/jfs_dmap.c=1681=static int dbFindCtl(struct bmap * bmp, int l2nb, int level, s64 * blkno)
--
fs/jfs/jfs_dmap.c-1698- lblkno = BLKTOCTL(b, bmp->db_l2nbperpage, lev);
fs/jfs/jfs_dmap.c:1699: mp = read_metapage(bmp->db_ipbmap, lblkno, PSIZE, 0);
fs/jfs/jfs_dmap.c-1700- if (mp == NULL)
--
fs/jfs/jfs_dmap.c=1800=dbAllocCtl(struct bmap * bmp, s64 nblocks, int l2nb, s64 blkno, s64 * results)
--
fs/jfs/jfs_dmap.c-1812- lblkno = BLKTODMAP(blkno, bmp->db_l2nbperpage);
fs/jfs/jfs_dmap.c:1813: mp = read_metapage(bmp->db_ipbmap, lblkno, PSIZE, 0);
fs/jfs/jfs_dmap.c-1814- if (mp == NULL)
--
fs/jfs/jfs_dmap.c-1844- lblkno = BLKTODMAP(b, bmp->db_l2nbperpage);
fs/jfs/jfs_dmap.c:1845: mp = read_metapage(bmp->db_ipbmap, lblkno, PSIZE, 0);
fs/jfs/jfs_dmap.c-1846- if (mp == NULL) {
--
fs/jfs/jfs_dmap.c-1897- lblkno = BLKTODMAP(b, bmp->db_l2nbperpage);
fs/jfs/jfs_dmap.c:1898: mp = read_metapage(bmp->db_ipbmap, lblkno, PSIZE, 0);
fs/jfs/jfs_dmap.c-1899- if (mp == NULL) {
--
fs/jfs/jfs_dmap.c=2470=dbAdjCtl(struct bmap * bmp, s64 blkno, int newval, int alloc, int level)
--
fs/jfs/jfs_dmap.c-2482- lblkno = BLKTOCTL(blkno, bmp->db_l2nbperpage, level);
fs/jfs/jfs_dmap.c:2483: mp = read_metapage(bmp->db_ipbmap, lblkno, PSIZE, 0);
fs/jfs/jfs_dmap.c-2484- if (mp == NULL)
--
fs/jfs/jfs_dmap.c=3191=int dbAllocBottomUp(struct inode *ip, s64 blkno, s64 nblocks)
--
fs/jfs/jfs_dmap.c-3216- lblkno = BLKTODMAP(blkno, bmp->db_l2nbperpage);
fs/jfs/jfs_dmap.c:3217: mp = read_metapage(ipbmap, lblkno, PSIZE, 0);
fs/jfs/jfs_dmap.c-3218- if (mp == NULL) {
--
fs/jfs/jfs_dmap.c=3367=int dbExtendFS(struct inode *ipbmap, s64 blkno, s64 nblocks)
--
fs/jfs/jfs_dmap.c-3450- p = BMAPBLKNO + nbperpage; /* L2 page */
fs/jfs/jfs_dmap.c:3451: l2mp = read_metapage(ipbmap, p, PSIZE, 0);
fs/jfs/jfs_dmap.c-3452- if (!l2mp) {
--
fs/jfs/jfs_dmap.c-3469- /* read in L1 page: (blkno & (MAXL1SIZE - 1)) */
fs/jfs/jfs_dmap.c:3470: l1mp = read_metapage(ipbmap, p, PSIZE, 0);
fs/jfs/jfs_dmap.c-3471- if (l1mp == NULL)
--
fs/jfs/jfs_dmap.c-3501-
fs/jfs/jfs_dmap.c:3502: l0mp = read_metapage(ipbmap, p, PSIZE, 0);
fs/jfs/jfs_dmap.c-3503- if (l0mp == NULL)
--
fs/jfs/jfs_dmap.c-3537- /* read in dmap page: */
fs/jfs/jfs_dmap.c:3538: mp = read_metapage(ipbmap, p,
fs/jfs/jfs_dmap.c-3539- PSIZE, 0);
--
fs/jfs/jfs_dmap.c-3544- /* assign/init dmap page */
fs/jfs/jfs_dmap.c:3545: mp = read_metapage(ipbmap, p,
fs/jfs/jfs_dmap.c-3546- PSIZE, 0);
--
fs/jfs/jfs_dtree.c=196=static struct metapage *read_index_page(struct inode *inode, s64 blkno)
--
fs/jfs/jfs_dtree.c-206-
fs/jfs/jfs_dtree.c:207: return read_metapage(inode, xaddr, PSIZE, 1);
fs/jfs/jfs_dtree.c-208-}
--
fs/jfs/jfs_dtree.c=322=static u32 add_index(tid_t tid, struct inode *ip, s64 bn, int slot)
--
fs/jfs/jfs_dtree.c-454- if (!mp) {
fs/jfs/jfs_dtree.c:455: jfs_err("add_index: get/read_metapage failed!");
fs/jfs/jfs_dtree.c-456- goto clean_up;
--
fs/jfs/jfs_imap.c=94=int diMount(struct inode *ipimap)
--
fs/jfs/jfs_imap.c-110-
fs/jfs/jfs_imap.c:111: mp = read_metapage(ipimap,
fs/jfs/jfs_imap.c-112- IMAPBLKNO << JFS_SBI(ipimap->i_sb)->l2nbperpage,
--
fs/jfs/jfs_imap.c=290=int diRead(struct inode *ip)
--
fs/jfs/jfs_imap.c-366- /* read the page of disk inode */
fs/jfs/jfs_imap.c:367: mp = read_metapage(ipimap, pageno << sbi->l2nbperpage, PSIZE, 1);
fs/jfs/jfs_imap.c-368- if (!mp) {
fs/jfs/jfs_imap.c:369: jfs_err("diRead: read_metapage failed");
fs/jfs/jfs_imap.c-370- return -EIO;
--
fs/jfs/jfs_imap.c=418=struct inode *diReadSpecial(struct super_block *sb, ino_t inum, int secondary)
--
fs/jfs/jfs_imap.c-446- /* read the page of fixed disk inode (AIT) in raw mode */
fs/jfs/jfs_imap.c:447: mp = read_metapage(ip, address << sbi->l2nbperpage, PSIZE, 1);
fs/jfs/jfs_imap.c-448- if (mp == NULL) {
--
fs/jfs/jfs_imap.c=500=void diWriteSpecial(struct inode *ip, int secondary)
--
fs/jfs/jfs_imap.c-517- /* read the page of fixed disk inode (AIT) in raw mode */
fs/jfs/jfs_imap.c:518: mp = read_metapage(ip, address << sbi->l2nbperpage, PSIZE, 1);
fs/jfs/jfs_imap.c-519- if (mp == NULL) {
--
fs/jfs/jfs_imap.c=581=int diWrite(tid_t tid, struct inode *ip)
--
fs/jfs/jfs_imap.c-638- retry:
fs/jfs/jfs_imap.c:639: mp = read_metapage(ipimap, pageno << sbi->l2nbperpage, PSIZE, 1);
fs/jfs/jfs_imap.c-640- if (!mp)
--
fs/jfs/jfs_imap.c=2663=static int diIAGRead(struct inomap * imap, int iagno, struct metapage ** mpp)
--
fs/jfs/jfs_imap.c-2671- /* read the iag. */
fs/jfs/jfs_imap.c:2672: *mpp = read_metapage(ipimap, blkno, PSIZE, 0);
fs/jfs/jfs_imap.c-2673- if (*mpp == NULL) {
--
fs/jfs/jfs_metapage.h=50=extern struct metapage *__get_metapage(struct inode *inode,
--
fs/jfs/jfs_metapage.h-53-
fs/jfs/jfs_metapage.h:54:#define read_metapage(inode, lblock, size, absolute)\
fs/jfs/jfs_metapage.h-55- __get_metapage(inode, lblock, size, absolute, false)
--
fs/jfs/xattr.c=351=static int ea_read(struct inode *ip, struct jfs_ea_list *ealist)
--
fs/jfs/xattr.c-395-
fs/jfs/xattr.c:396: if (!(mp = read_metapage(ip, blkno + i, bytes_to_read, 1)))
fs/jfs/xattr.c-397- return -EIO;
--
fs/jfs/xattr.c=426=static int ea_get(struct inode *inode, struct ea_buffer *ea_buf, int min_size)
--
fs/jfs/xattr.c-548- ea_buf->flag = EA_EXTENT;
fs/jfs/xattr.c:549: ea_buf->mp = read_metapage(inode, addressDXD(&ji->ea),
fs/jfs/xattr.c-550- lengthDXD(&ji->ea) << sb->s_blocksize_bits,
] |
| 117/2 | 2026/03/16 15:56 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 101454 output: 48 thoughts: 146 Thoughts: **Analyzing the `read_metapage` Macro and its Implications** `read_metapage` is a macro that forwards to `__get_metapage` with `new = false`. Call sites such as `BT_GETPAGE` invoke it as `read_metapage((IP), BN, SIZE, 1)`, so that final `1` maps to the `absolute` argument. With `absolute` set, `__get_metapage` operates in absolute addressing mode and uses `JFS_SBI(inode->i_sb)->direct_inode->i_mapping` rather than the inode's own mapping. That distinction matters: the folios backing these metapages live in the direct inode's page cache. I need to re-read `__get_metapage` and make sure I fully understand the implications of that usage. |
| 118/2 | 2026/03/16 15:56 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:__get_metapage] Results: map[SourceCode: 683: struct metapage *__get_metapage(struct inode *inode, unsigned long lblock,
684: unsigned int size, int absolute,
685: unsigned long new)
686: {
687: int l2BlocksPerPage;
688: int l2bsize;
689: struct address_space *mapping;
690: struct metapage *mp = NULL;
691: struct folio *folio;
692: unsigned long page_index;
693: unsigned long page_offset;
694:
695: jfs_info("__get_metapage: ino = %ld, lblock = 0x%lx, abs=%d",
696: inode->i_ino, lblock, absolute);
697:
698: l2bsize = inode->i_blkbits;
699: l2BlocksPerPage = PAGE_SHIFT - l2bsize;
700: page_index = lblock >> l2BlocksPerPage;
701: page_offset = (lblock - (page_index << l2BlocksPerPage)) << l2bsize;
702: if ((page_offset + size) > PAGE_SIZE) {
703: jfs_err("MetaData crosses page boundary!!");
704: jfs_err("lblock = %lx, size = %d", lblock, size);
705: dump_stack();
706: return NULL;
707: }
708: if (absolute)
709: mapping = JFS_SBI(inode->i_sb)->direct_inode->i_mapping;
710: else {
711: /*
712: * If an nfs client tries to read an inode that is larger
713: * than any existing inodes, we may try to read past the
714: * end of the inode map
715: */
716: if ((lblock << inode->i_blkbits) >= inode->i_size)
717: return NULL;
718: mapping = inode->i_mapping;
719: }
720:
721: if (new && (PSIZE == PAGE_SIZE)) {
722: folio = filemap_grab_folio(mapping, page_index);
723: if (IS_ERR(folio)) {
724: jfs_err("filemap_grab_folio failed!");
725: return NULL;
726: }
727: folio_mark_uptodate(folio);
728: } else {
729: folio = read_mapping_folio(mapping, page_index, NULL);
730: if (IS_ERR(folio)) {
731: jfs_err("read_mapping_page failed!");
732: return NULL;
733: }
734: folio_lock(folio);
735: }
736:
737: mp = folio_to_mp(folio, page_offset);
738: if (mp) {
739: if (mp->logical_size != size) {
740: jfs_error(inode->i_sb,
741: "get_mp->logical_size != size\n");
742: jfs_err("logical_size = %d, size = %d",
743: mp->logical_size, size);
744: dump_stack();
745: goto unlock;
746: }
747: mp->count++;
748: lock_metapage(mp);
749: if (test_bit(META_discard, &mp->flag)) {
750: if (!new) {
751: jfs_error(inode->i_sb,
752: "using a discarded metapage\n");
753: discard_metapage(mp);
754: goto unlock;
755: }
756: clear_bit(META_discard, &mp->flag);
757: }
758: } else {
759: INCREMENT(mpStat.pagealloc);
760: mp = alloc_metapage(GFP_NOFS);
761: if (!mp)
762: goto unlock;
763: mp->folio = folio;
764: mp->sb = inode->i_sb;
765: mp->flag = 0;
766: mp->xflag = COMMIT_PAGE;
767: mp->count = 1;
768: mp->nohomeok = 0;
769: mp->logical_size = size;
770: mp->data = folio_address(folio) + page_offset;
771: mp->index = lblock;
772: if (unlikely(insert_metapage(folio, mp))) {
773: free_metapage(mp);
774: goto unlock;
775: }
776: lock_metapage(mp);
777: }
778:
779: if (new) {
780: jfs_info("zeroing mp = 0x%p", mp);
781: memset(mp->data, 0, PSIZE);
782: }
783:
784: folio_unlock(folio);
785: jfs_info("__get_metapage: returning = 0x%p data = 0x%p", mp, mp->data);
786: return mp;
787:
788: unlock:
789: folio_unlock(folio);
790: return NULL;
791: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 119/2 | 2026/03/16 15:56 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 103558 output: 43 thoughts: 164 Thoughts: **Analyzing JFS Direct I/O and Folio Mapping** When the `absolute` flag is 1, the code uses `JFS_SBI(inode->i_sb)->direct_inode->i_mapping`, so `folio->mapping` ends up pointing at `sbi->direct_inode->i_mapping`. If `folio->mapping` is later NULL, the folio must have been removed from that mapping. Under what circumstances does that removal happen? The only place I can find is `truncate_inode_pages(sbi->direct_inode->i_mapping, 0)` — that function is the gatekeeper for removing folios from this mapping. And it is called from `jfs_put_super`, meaning the direct_inode pages are cleared as part of filesystem unmount. This is the crux of the issue. |
| 120/2 | 2026/03/16 15:57 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/super.c IncludeLines:true Name:jfs_put_super] Results: map[SourceCode: 185: static void jfs_put_super(struct super_block *sb)
186: {
187: struct jfs_sb_info *sbi = JFS_SBI(sb);
188: int rc;
189:
190: jfs_info("In jfs_put_super");
191:
192: jfs_quota_off_umount(sb);
193:
194: rc = jfs_umount(sb);
195: if (rc)
196: jfs_err("jfs_umount failed with return code %d", rc);
197:
198: unload_nls(sbi->nls_tab);
199:
200: truncate_inode_pages(sbi->direct_inode->i_mapping, 0);
201: iput(sbi->direct_inode);
202:
203: kfree(sbi);
204: }
SourceFile:fs/jfs/super.c] |
| 121/2 | 2026/03/16 15:57 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 104337 output: 42 thoughts: 284 Thoughts: **My JFS Superblock Unmounting Concerns** Something bugs me about the unmount path: `jfs_put_super` calls `truncate_inode_pages` on the `direct_inode`'s i_mapping, removing all of its folios from the page cache. But `jfs_lazycommit` is a global kthread that may still be processing transactions for this very superblock, so there is a potential race. Does `jfs_put_super` wait for `jfs_lazycommit` to finish first? The unmount flow calls `jfs_umount`, which calls `jfs_flush_journal(log, 2)`, and `jfs_flush_journal` is supposed to wait for `log->cqueue` and `log->synclist` to drain. But if `log` is NULL — as it is on a read-only mount — `jfs_flush_journal` returns immediately. A read-only filesystem should have no transactions, but what if it was mounted read-write and then remounted read-only? That goes through `jfs_remount`, so I need to examine `jfs_remount` closely; there may be a window of vulnerability here. |
| 122/2 | 2026/03/16 15:58 | tool | codesearch-definition-source |
Error:requested entity does not exist Args: map[ContextFile:fs/jfs/super.c IncludeLines:true Name:jfs_remount] Results: map[SourceCode: SourceFile:] |
| 123/2 | 2026/03/16 15:58 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 105089 output: 16 thoughts: 20 Thoughts: **Diving into the JFS Superblock Code** The definition lookup for `jfs_remount` failed, so I need another way to find the remount handling. For a JFS filesystem the superblock code in `fs/jfs/super.c` is the natural starting point: remount is almost certainly managed there, since it involves changes to the core mount parameters or flags, even if the handler is not literally named `jfs_remount`. I'll search for "remount" and see which functions or data structures come up; hopefully that gets me closer to how the kernel handles the change. |
| 124/2 | 2026/03/16 15:59 | tool | grepper |
Args:map[Expression:remount] Results: map[Output:
Full output is too long, showing 500 out of 2652 lines.
Use more precise expression if possible.
[Documentation/ABI/testing/sysfs-fs-ubifs=5=Description:
--
Documentation/ABI/testing/sysfs-fs-ubifs-10-
Documentation/ABI/testing/sysfs-fs-ubifs:11: The counter is reset to 0 with a remount.
Documentation/ABI/testing/sysfs-fs-ubifs-12-
--
Documentation/ABI/testing/sysfs-fs-ubifs=17=Description:
--
Documentation/ABI/testing/sysfs-fs-ubifs-22-
Documentation/ABI/testing/sysfs-fs-ubifs:23: The counter is reset to 0 with a remount.
Documentation/ABI/testing/sysfs-fs-ubifs-24-
--
Documentation/ABI/testing/sysfs-fs-ubifs=29=Description:
--
Documentation/ABI/testing/sysfs-fs-ubifs-34-
Documentation/ABI/testing/sysfs-fs-ubifs:35: The counter is reset to 0 with a remount.
--
Documentation/admin-guide/cgroup-v1/cgroups.rst=375=subsystems, type::
--
Documentation/admin-guide/cgroup-v1/cgroups.rst-378-
Documentation/admin-guide/cgroup-v1/cgroups.rst:379:While remounting cgroups is currently supported, it is not recommend
Documentation/admin-guide/cgroup-v1/cgroups.rst-380-to use it. Remounting allows changing bound subsystems and
--
Documentation/admin-guide/cgroup-v1/cgroups.rst=382=hierarchy is empty and release_agent itself should be replaced with
Documentation/admin-guide/cgroup-v1/cgroups.rst:383:conventional fsnotify. The support for remounting will be removed in
Documentation/admin-guide/cgroup-v1/cgroups.rst-384-the future.
--
Documentation/admin-guide/cgroup-v1/cgroups.rst=402=If you want to change the value of release_agent::
--
Documentation/admin-guide/cgroup-v1/cgroups.rst-405-
Documentation/admin-guide/cgroup-v1/cgroups.rst:406:It can also be changed via remount.
Documentation/admin-guide/cgroup-v1/cgroups.rst-407-
--
Documentation/admin-guide/cgroup-v2.rst=185=cgroup v2 currently supports the following mount options.
--
Documentation/admin-guide/cgroup-v2.rst-189- option is system wide and can only be set on mount or modified
Documentation/admin-guide/cgroup-v2.rst:190: through remount from the init namespace. The mount option is
Documentation/admin-guide/cgroup-v2.rst-191- ignored on non-init namespace mounts. Please refer to the
--
Documentation/admin-guide/cgroup-v2.rst-206- This option is system wide and can only be set on mount or
Documentation/admin-guide/cgroup-v2.rst:207: modified through remount from the init namespace. The mount
Documentation/admin-guide/cgroup-v2.rst-208- option is ignored on non-init namespace mounts.
--
Documentation/admin-guide/cgroup-v2.rst-246- will not be tracked by the memory controller (even if cgroup
Documentation/admin-guide/cgroup-v2.rst:247: v2 is remounted later on).
Documentation/admin-guide/cgroup-v2.rst-248-
--
Documentation/admin-guide/cifs/usage.rst=278=A partial list of the supported mount options follows:
--
Documentation/admin-guide/cifs/usage.rst-495- that the uid for the file can change when the inode is
Documentation/admin-guide/cifs/usage.rst:496: reloaded (or the user remounts the share).
Documentation/admin-guide/cifs/usage.rst-497- nosetuids
--
Documentation/admin-guide/cifs/usage.rst-615- versions 3.0.24 and 3.0.25.
Documentation/admin-guide/cifs/usage.rst:616: remount
Documentation/admin-guide/cifs/usage.rst:617: remount the share (often used to change from ro to rw mounts
Documentation/admin-guide/cifs/usage.rst-618- or vice versa)
--
Documentation/admin-guide/ext4.rst=141=When mounting an ext4 filesystem, the following option are accepted:
--
Documentation/admin-guide/ext4.rst-225- Simulate the effects of calling ext4_abort() for debugging purposes.
Documentation/admin-guide/ext4.rst:226: This is normally used while remounting a filesystem which is already
Documentation/admin-guide/ext4.rst-227- mounted.
Documentation/admin-guide/ext4.rst-228-
Documentation/admin-guide/ext4.rst:229: errors=remount-ro
Documentation/admin-guide/ext4.rst-230- Remount the filesystem read-only on an error.
--
Documentation/admin-guide/jfs.rst=18=resize=value
--
Documentation/admin-guide/jfs.rst-20- growing a volume, not shrinking it. This option is only
Documentation/admin-guide/jfs.rst:21: valid during a remount, when the volume is mounted
Documentation/admin-guide/jfs.rst-22- read-write. The resize keyword with no value will grow
--
Documentation/admin-guide/jfs.rst=31=integrity(*)
Documentation/admin-guide/jfs.rst-32- Commit metadata changes to the journal. Use this option to
Documentation/admin-guide/jfs.rst:33: remount a volume where the nointegrity option was
Documentation/admin-guide/jfs.rst-34- previously specified in order to restore normal behavior.
--
Documentation/admin-guide/jfs.rst=36=errors=continue
Documentation/admin-guide/jfs.rst-37- Keep going on a filesystem error.
Documentation/admin-guide/jfs.rst:38:errors=remount-ro(*)
Documentation/admin-guide/jfs.rst-39- Remount the filesystem read-only on an error.
--
Documentation/admin-guide/mm/transhuge.rst=448=using the kernel parameter ``transparent_hugepage_tmpfs=<policy>``.
Documentation/admin-guide/mm/transhuge.rst-449-
Documentation/admin-guide/mm/transhuge.rst:450:``mount -o remount,huge= /mountpoint`` works fine after mount: remounting
Documentation/admin-guide/mm/transhuge.rst-451-``huge=never`` will not attempt to break up huge pages at all, just stop more
--
Documentation/admin-guide/sysrq.rst=20=to 1. Here is the list of possible values in /proc/sys/kernel/sysrq:
--
Documentation/admin-guide/sysrq.rst-30- 16 = 0x10 - enable sync command
Documentation/admin-guide/sysrq.rst:31: 32 = 0x20 - enable remount read-only
Documentation/admin-guide/sysrq.rst-32- 64 = 0x40 - enable signalling of processes (term, kill, oom-kill)
--
Documentation/admin-guide/sysrq.rst=97=Command Function
--
Documentation/admin-guide/sysrq.rst-144-
Documentation/admin-guide/sysrq.rst:145:``u`` Will attempt to remount all mounted filesystems read-only.
Documentation/admin-guide/sysrq.rst-146-
--
Documentation/admin-guide/sysrq.rst=198=the "OK" and "Done" appear on the screen.
--
Documentation/admin-guide/sysrq.rst-200-``umount(u)`` can be used to mark filesystems as properly unmounted. From the
Documentation/admin-guide/sysrq.rst:201:running system's point of view, they will be remounted read-only. The remount
Documentation/admin-guide/sysrq.rst-202-isn't complete until you see the "OK" and "Done" message appear on the screen.
--
Documentation/admin-guide/xfs.rst=22=When mounting an XFS filesystem, the following options are accepted.
--
Documentation/admin-guide/xfs.rst-220- default frequency. Can be specified multiple times to enable multiple
Documentation/admin-guide/xfs.rst:221: errortags. Specifying this option on remount will reset the error tag
Documentation/admin-guide/xfs.rst-222- to the default value if it was set to any other value before.
--
Documentation/filesystems/autofs-mount-control.rst=30=The actual problem with autofs is that it can't reconnect to existing
Documentation/filesystems/autofs-mount-control.rst:31:mounts. Immediately one thinks of just adding the ability to remount
Documentation/filesystems/autofs-mount-control.rst-32-autofs file systems would solve it, but alas, that can't work. This is
--
Documentation/filesystems/dax.rst=40=Summary
--
Documentation/filesystems/dax.rst-78- in-core inode state (`S_DAX`) will be overridden until the filesystem is
Documentation/filesystems/dax.rst:79: remounted with dax=inode and the inode is evicted from kernel memory.
Documentation/filesystems/dax.rst-80-
--
Documentation/filesystems/ext2.rst=33=errors=continue Keep going on a filesystem error.
Documentation/filesystems/ext2.rst:34:errors=remount-ro Remount the filesystem read-only on an error.
Documentation/filesystems/ext2.rst-35-errors=panic Panic and halt the machine if an error occurs.
--
Documentation/filesystems/ext4/mmp.rst=22=MMP sequence number, then another node (node B) has mounted the
Documentation/filesystems/ext4/mmp.rst:23:filesystem, and node A remounts the filesystem read-only. If the
Documentation/filesystems/ext4/mmp.rst-24-sequence numbers match, the sequence number is incremented both in
--
Documentation/filesystems/f2fs.rst=370=errors=%s Specify f2fs behavior on critical errors. This supports modes:
Documentation/filesystems/f2fs.rst:371: "panic", "continue" and "remount-ro", respectively, trigger
Documentation/filesystems/f2fs.rst:372: panic immediately, continue without doing anything, and remount
Documentation/filesystems/f2fs.rst-373- the partition in read-only mode. By default it uses "continue"
--
Documentation/filesystems/f2fs.rst-378- ====================== =============== =============== ========
Documentation/filesystems/f2fs.rst:379: mode continue remount-ro panic
Documentation/filesystems/f2fs.rst-380- ====================== =============== =============== ========
--
Documentation/filesystems/gfs2/uevents.rst=25=of the filesystem respectively.
--
Documentation/filesystems/gfs2/uevents.rst-29-
Documentation/filesystems/gfs2/uevents.rst:30:The ONLINE uevent is generated after a successful mount or remount. It
Documentation/filesystems/gfs2/uevents.rst-31-has the same environment variables as the ADD uevent. The ONLINE
--
Documentation/filesystems/gfs2/uevents.rst=57=cluster. For this reason the ONLINE uevent was used when adding a new
Documentation/filesystems/gfs2/uevents.rst:58:uevent for a successful mount or remount.
Documentation/filesystems/gfs2/uevents.rst-59-
--
Documentation/filesystems/hpfs.rst=40=check=none,normal,strict (default normal)
--
Documentation/filesystems/hpfs.rst-45- bitmaps when accessing it).
Documentation/filesystems/hpfs.rst:46:errors=continue,remount-ro,panic (default remount-ro)
Documentation/filesystems/hpfs.rst-47- Behaviour when filesystem errors found.
--
Documentation/filesystems/hpfs.rst=150=and I'll make it. However, this driver should detect presence of HPFS386
Documentation/filesystems/hpfs.rst:151:structures, remount read-only and not destroy them (I hope).
Documentation/filesystems/hpfs.rst-152-
--
Documentation/filesystems/nilfs2.rst=58=errors=continue Keep going on a filesystem error.
Documentation/filesystems/nilfs2.rst:59:errors=remount-ro(*) Remount the filesystem read-only on an error.
Documentation/filesystems/nilfs2.rst-60-errors=panic Panic and halt the machine if an error occurs.
--
Documentation/filesystems/ocfs2.rst=52=barrier=1 This enables/disables barriers. barrier=0 disables it,
Documentation/filesystems/ocfs2.rst-53- barrier=1 enables it.
Documentation/filesystems/ocfs2.rst:54:errors=remount-ro(*) Remount the filesystem read-only on an error.
Documentation/filesystems/ocfs2.rst-55-errors=panic Panic and halt the machine if an error occurs.
--
Documentation/filesystems/overlayfs.rst=492=that was in the lower at the time of the copy-up. If at any time
Documentation/filesystems/overlayfs.rst:493:(during a mount, after a remount, etc) such a file in the lower is
Documentation/filesystems/overlayfs.rst-494-replaced or modified in any way, access to the corresponding file in
--
Documentation/filesystems/proc.rst=2421=mountpoints within the same namespace::
--
Documentation/filesystems/proc.rst-2433-
Documentation/filesystems/proc.rst:2434:and only after remounting procfs mount options will change at all
Documentation/filesystems/proc.rst-2435-mountpoints::
Documentation/filesystems/proc.rst-2436-
Documentation/filesystems/proc.rst:2437: # mount -o remount,hidepid=1 -t proc proc /tmp/proc
Documentation/filesystems/proc.rst-2438-
--
Documentation/filesystems/tmpfs.rst=20=explained further below, some of which can be reconfigured dynamically on the
Documentation/filesystems/tmpfs.rst:21:fly using a remount ('mount -o remount ...') of the filesystem. A tmpfs
Documentation/filesystems/tmpfs.rst-22-filesystem can be resized but it cannot be resized to a size below its current
--
Documentation/filesystems/tmpfs.rst=89=These parameters accept a suffix k, m or g for kilo, mega and giga and
Documentation/filesystems/tmpfs.rst:90:can be changed on remount. The size parameter also accepts a suffix %
Documentation/filesystems/tmpfs.rst-91-to limit this tmpfs instance to that percentage of your physical RAM:
--
Documentation/filesystems/tmpfs.rst=142=grpquota_inode_hardlimit Set global group quota inode hard limit.
--
Documentation/filesystems/tmpfs.rst-144-
Documentation/filesystems/tmpfs.rst:145:None of the quota related mount options can be set or changed on remount.
Documentation/filesystems/tmpfs.rst-146-
Documentation/filesystems/tmpfs.rst=147=Quota limit parameters accept a suffix k, m or g for kilo, mega and giga
Documentation/filesystems/tmpfs.rst:148:and can't be changed on remount. Default global quota limits are taking
Documentation/filesystems/tmpfs.rst-149-effect for any and all user/group/project except root the first time the
--
Documentation/filesystems/tmpfs.rst=161=all files in that instance (if CONFIG_NUMA is enabled) - which can be
Documentation/filesystems/tmpfs.rst:162:adjusted on the fly via 'mount -o remount ...'
Documentation/filesystems/tmpfs.rst-163-
--
Documentation/filesystems/tmpfs.rst=207=mount options. It can be added later, when the tmpfs is already mounted
Documentation/filesystems/tmpfs.rst:208:on MountPoint, by 'mount -o remount,mpol=Policy:NodeList MountPoint'.
Documentation/filesystems/tmpfs.rst-209-
--
Documentation/filesystems/tmpfs.rst=217=gid The group id
--
Documentation/filesystems/tmpfs.rst-219-
Documentation/filesystems/tmpfs.rst:220:These options do not have any effect on remount. You can change these
Documentation/filesystems/tmpfs.rst-221-parameters with chmod(1), chown(1) and chgrp(1) on a mounted filesystem.
--
Documentation/filesystems/vfat.rst=16=VFAT MOUNT OPTIONS
--
Documentation/filesystems/vfat.rst-183-
Documentation/filesystems/vfat.rst:184:**errors=panic|continue|remount-ro**
Documentation/filesystems/vfat.rst-185- specify FAT behavior on critical errors: panic, continue
Documentation/filesystems/vfat.rst:186: without doing anything or remount the partition in
Documentation/filesystems/vfat.rst-187- read-only mode (default behavior).
--
Documentation/filesystems/vfs.rst=1460=Parsing options
--
Documentation/filesystems/vfs.rst-1462-
Documentation/filesystems/vfs.rst:1463:On mount and remount the filesystem is passed a string containing a
Documentation/filesystems/vfs.rst-1464-comma separated list of mount options. The options can have either of
--
Documentation/filesystems/virtiofs.rst=42=Mount options
--
Documentation/filesystems/virtiofs.rst-44-
Documentation/filesystems/virtiofs.rst:45:virtiofs supports general VFS mount options, for example, remount,
Documentation/filesystems/virtiofs.rst-46-ro, rw, context, etc. It also supports FUSE mount options.
--
Documentation/filesystems/xfs/xfs-online-fsck-design.rst=778=the following ways:
--
Documentation/filesystems/xfs/xfs-online-fsck-design.rst-790-* Race ``xfs_scrub`` in check and force-repair mode against ``fsstress`` while
Documentation/filesystems/xfs/xfs-online-fsck-design.rst:791: remounting the filesystem read-only and read-write.
Documentation/filesystems/xfs/xfs-online-fsck-design.rst-792-* The same, but running ``fsx`` instead of ``fsstress``. (Not done yet?)
--
Documentation/filesystems/zonefs.rst=258=conditions::
--
Documentation/filesystems/zonefs.rst-266- | | good | fixed yes no yes yes |
Documentation/filesystems/zonefs.rst:267: | remount-ro | read-only | as is yes no yes no |
Documentation/filesystems/zonefs.rst-268- | (default) | offline | 0 no no no no |
--
Documentation/filesystems/zonefs.rst=283=Further notes:
Documentation/filesystems/zonefs.rst-284-
Documentation/filesystems/zonefs.rst:285:* The "errors=remount-ro" mount option is the default behavior of zonefs I/O
Documentation/filesystems/zonefs.rst-286- error processing if no errors mount option is specified.
Documentation/filesystems/zonefs.rst:287:* With the "errors=remount-ro" mount option, the change of the file access
Documentation/filesystems/zonefs.rst:288: permissions to read-only applies to all files. The file system is remounted
Documentation/filesystems/zonefs.rst-289- read-only.
--
Documentation/filesystems/zonefs.rst-296- the device will not re-enable file write access.
Documentation/filesystems/zonefs.rst:297:* File access permission changes implied by the remount-ro, zone-ro and
Documentation/filesystems/zonefs.rst-298- zone-offline mount options are temporary for zones in a good condition.
Documentation/filesystems/zonefs.rst:299: Unmounting and remounting the file system will restore the previous default
Documentation/filesystems/zonefs.rst-300- (format time values) access rights to the files affected.
--
Documentation/filesystems/zonefs.rst=318=condition changes. The defined behaviors are as follow:
Documentation/filesystems/zonefs.rst-319-
Documentation/filesystems/zonefs.rst:320:* remount-ro (default)
Documentation/filesystems/zonefs.rst-321-* zone-ro
--
Documentation/power/swsusp.rst=66=before suspend (it is limited to around 2/5 of available RAM by default).
--
Documentation/power/swsusp.rst-79- the init script by using the /sys/power/resume file. It is vital
Documentation/power/swsusp.rst:80: that this be done prior to remounting any filesystems (even as
Documentation/power/swsusp.rst-81- read-only) otherwise data may be corrupted.
--
Documentation/power/swsusp.rst=446=A:
--
Documentation/power/swsusp.rst-466- Firewire, CompactFlash, MMC, external SATA, or even IDE hotplug bays)
Documentation/power/swsusp.rst:467: before suspending; then remount them after resuming.
Documentation/power/swsusp.rst-468-
--
Documentation/translations/zh_CN/filesystems/tmpfs.rst=18=tmpfs将所有文件保存在内核缓存中,随着文件内容增长或缩小可以将不需要的
Documentation/translations/zh_CN/filesystems/tmpfs.rst:19:页面swap出去。它具有最大限制,可以通过“mount -o remount ...”调整。
Documentation/translations/zh_CN/filesystems/tmpfs.rst-20-
--
Documentation/translations/zh_CN/filesystems/tmpfs.rst=59=nr_inodes tmpfs实例的最大inode个数。默认值是物理内存页数的一半,或者
--
Documentation/translations/zh_CN/filesystems/tmpfs.rst-62-
Documentation/translations/zh_CN/filesystems/tmpfs.rst:63:这些参数接受后缀k,m或g表示千,兆和千兆字节,可以在remount时更改。
Documentation/translations/zh_CN/filesystems/tmpfs.rst-64-size参数也接受后缀%用来限制tmpfs实例占用物理RAM的百分比:
--
Documentation/translations/zh_CN/filesystems/tmpfs.rst=72=tmpfs具有为所有文件设置NUMA内存分配策略挂载选项(如果启用了CONFIG_NUMA),
Documentation/translations/zh_CN/filesystems/tmpfs.rst:73:可以通过“mount -o remount ...”调整
Documentation/translations/zh_CN/filesystems/tmpfs.rst-74-
--
Documentation/translations/zh_CN/filesystems/tmpfs.rst=108=revocery内核),或者具有较少的节点在线,建议从自动模式中省略mpol选项挂载选项。
Documentation/translations/zh_CN/filesystems/tmpfs.rst:109:可以在以后通过“mount -o remount,mpol=Policy:NodeList MountPoint”添加到挂载点。
Documentation/translations/zh_CN/filesystems/tmpfs.rst-110-
--
Documentation/translations/zh_CN/filesystems/tmpfs.rst=116=gid 组ID
--
Documentation/translations/zh_CN/filesystems/tmpfs.rst-118-
Documentation/translations/zh_CN/filesystems/tmpfs.rst:119:这些选项对remount没有任何影响。您可以通过chmod(1),chown(1)和chgrp(1)的更改
Documentation/translations/zh_CN/filesystems/tmpfs.rst-120-已经挂载的参数。
--
Documentation/translations/zh_TW/filesystems/tmpfs.rst=18=tmpfs將所有文件保存在內核緩存中,隨着文件內容增長或縮小可以將不需要的
Documentation/translations/zh_TW/filesystems/tmpfs.rst:19:頁面swap出去。它具有最大限制,可以通過“mount -o remount ...”調整。
Documentation/translations/zh_TW/filesystems/tmpfs.rst-20-
--
Documentation/translations/zh_TW/filesystems/tmpfs.rst=59=nr_inodes tmpfs實例的最大inode個數。默認值是物理內存頁數的一半,或者
--
Documentation/translations/zh_TW/filesystems/tmpfs.rst-62-
Documentation/translations/zh_TW/filesystems/tmpfs.rst:63:這些參數接受後綴k,m或g表示千,兆和千兆字節,可以在remount時更改。
Documentation/translations/zh_TW/filesystems/tmpfs.rst-64-size參數也接受後綴%用來限制tmpfs實例佔用物理RAM的百分比:
--
Documentation/translations/zh_TW/filesystems/tmpfs.rst=72=tmpfs具有爲所有文件設置NUMA內存分配策略掛載選項(如果啓用了CONFIG_NUMA),
Documentation/translations/zh_TW/filesystems/tmpfs.rst:73:可以通過“mount -o remount ...”調整
Documentation/translations/zh_TW/filesystems/tmpfs.rst-74-
--
Documentation/translations/zh_TW/filesystems/tmpfs.rst=108=revocery內核),或者具有較少的節點在線,建議從自動模式中省略mpol選項掛載選項。
Documentation/translations/zh_TW/filesystems/tmpfs.rst:109:可以在以後通過“mount -o remount,mpol=Policy:NodeList MountPoint”添加到掛載點。
Documentation/translations/zh_TW/filesystems/tmpfs.rst-110-
--
Documentation/translations/zh_TW/filesystems/tmpfs.rst=116=gid 組ID
--
Documentation/translations/zh_TW/filesystems/tmpfs.rst-118-
Documentation/translations/zh_TW/filesystems/tmpfs.rst:119:這些選項對remount沒有任何影響。您可以通過chmod(1),chown(1)和chgrp(1)的更改
Documentation/translations/zh_TW/filesystems/tmpfs.rst-120-已經掛載的參數。
--
Documentation/virt/uml/user_mode_linux_howto_v2.rst=142=will be empty and it needs an entry for the root file system::
Documentation/virt/uml/user_mode_linux_howto_v2.rst-143-
Documentation/virt/uml/user_mode_linux_howto_v2.rst:144: /dev/ubd0 ext4 discard,errors=remount-ro 0 1
Documentation/virt/uml/user_mode_linux_howto_v2.rst-145-
--
drivers/android/binder/rust_binderfs.c=294=static int binderfs_fs_context_reconfigure(struct fs_context *fc)
--
drivers/android/binder/rust_binderfs.c-299- if (info->mount_opts.stats_mode != ctx->stats_mode)
drivers/android/binder/rust_binderfs.c:300: return invalfc(fc, "Binderfs stats mode cannot be changed during a remount");
drivers/android/binder/rust_binderfs.c-301-
--
drivers/android/binderfs.c=303=static int binderfs_fs_context_reconfigure(struct fs_context *fc)
--
drivers/android/binderfs.c-308- if (info->mount_opts.stats_mode != ctx->stats_mode)
drivers/android/binderfs.c:309: return invalfc(fc, "Binderfs stats mode cannot be changed during a remount");
drivers/android/binderfs.c-310-
--
drivers/scsi/imm.h-31- * Ten minutes later after a few fixes.... (LITERALLY!!!)
drivers/scsi/imm.h:32: * Have mounted disk, copied file, dismounted disk, remount disk, diff file
drivers/scsi/imm.h-33- * ----- It actually works!!! -----
--
drivers/tty/sysrq.c=200=static void sysrq_handle_mountro(u8 key)
drivers/tty/sysrq.c-201-{
drivers/tty/sysrq.c:202: emergency_remount();
drivers/tty/sysrq.c-203-}
--
drivers/usb/storage/unusual_devs.h=411=UNUSUAL_DEV( 0x04b8, 0x0602, 0x0110, 0x0110,
--
drivers/usb/storage/unusual_devs.h-417- * Reported by James Buren <braewoods+lkml@braewoods.net>
drivers/usb/storage/unusual_devs.h:418: * Virtual ISOs cannot be remounted if ejected while the device is locked
drivers/usb/storage/unusual_devs.h-419- * Disable locking to mimic Windows behavior that bypasses the issue
--
fs/affs/Changes=183=Version 3.6
--
fs/affs/Changes-221-
fs/affs/Changes:222:- Added a remount function, so attempts to remount
fs/affs/Changes-223- a dircache filesystem or one with errors read/write
fs/affs/Changes:224: can be trapped. Previously, ro remounts didn't
fs/affs/Changes:225: flush the super block, and rw remounts didn't
fs/affs/Changes-226- create allocation zones ...
--
fs/affs/Changes-234- bitmap pointers read only, also allows them
fs/affs/Changes:235: to be remounted read/write.
fs/affs/Changes-236-
--
fs/affs/Changes=276=Version 3.3
--
fs/affs/Changes-285- is no pointer to the super block available.)
fs/affs/Changes:286:- The filesystem is remounted read-only after an
fs/affs/Changes-287- error.
--
fs/affs/super.c=306=static int affs_fill_super(struct super_block *sb, struct fs_context *fc)
--
fs/affs/super.c-317- int i, j;
fs/affs/super.c:318: int tmp_flags; /* fix remount prototype... */
fs/affs/super.c-319- u8 sig[4];
--
fs/affs/super.c=518=static int affs_reconfigure(struct fs_context *fc)
--
fs/affs/super.c-531- * NB: Historically, only mount_flags, mode, uid, gic, prefix,
fs/affs/super.c:532: * and volume are accepted during remount.
fs/affs/super.c-533- */
--
fs/affs/super.c=614=static int affs_init_fs_context(struct fs_context *fc)
--
fs/affs/super.c-627- * NB: historically, no options other than volume were
fs/affs/super.c:628: * preserved across a remount unless they were explicitly
fs/affs/super.c-629- * passed in.
--
fs/befs/linuxvfs.c=688=befs_parse_param(struct fs_context *fc, struct fs_parameter *param)
--
fs/befs/linuxvfs.c-693-
fs/befs/linuxvfs.c:694: /* befs ignores all options on remount */
fs/befs/linuxvfs.c-695- if (fc->purpose == FS_CONTEXT_FOR_RECONFIGURE)
--
fs/btrfs/block-group.c=1538=void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
--
fs/btrfs/block-group.c-1742- /*
fs/btrfs/block-group.c:1743: * DISCARD can flip during remount. On zoned filesystems, we
fs/btrfs/block-group.c-1744- * need to reset sequential-required zones.
--
fs/btrfs/block-group.c=1879=void btrfs_reclaim_bgs_work(struct work_struct *work)
--
fs/btrfs/block-group.c-1986- * for the read-only case. As we did take the super write lock,
fs/btrfs/block-group.c:1987: * "mount -o remount,ro" won't happen and read-only filesystem
fs/btrfs/block-group.c-1988- * means it is forced read-only due to a fatal error. So, it
--
fs/btrfs/compression.c=1003=struct compressed_bio *btrfs_compress_bio(struct btrfs_inode *inode,
--
fs/btrfs/compression.c-1033- /*
fs/btrfs/compression.c:1034: * This can happen when compression races with remount setting
fs/btrfs/compression.c-1035- * it to 'no compress', while caller doesn't call
--
fs/btrfs/dev-replace.c=1266=bool __pure btrfs_dev_replace_is_ongoing(struct btrfs_dev_replace *dev_replace)
--
fs/btrfs/dev-replace.c-1282- * the tgtdev is missing (or "btrfs dev scan") was
fs/btrfs/dev-replace.c:1283: * not called and the filesystem is remounted
fs/btrfs/dev-replace.c-1284- * in degraded state. This does not stop the
--
fs/btrfs/disk-io.c=1420=static int cleaner_kthread(void *arg)
--
fs/btrfs/disk-io.c-1461- /*
fs/btrfs/disk-io.c:1462: * The defragger has dealt with the R/O remount and umount,
fs/btrfs/disk-io.c-1463- * needn't do anything special here.
--
fs/btrfs/disk-io.c=2982=static int btrfs_cleanup_fs_roots(struct btrfs_fs_info *fs_info)
--
fs/btrfs/disk-io.c-3034- * Mounting logic specific to read-write file systems. Shared by open_ctree
fs/btrfs/disk-io.c:3035: * and btrfs_remount when remounting from read-only to read-write.
fs/btrfs/disk-io.c-3036- */
--
fs/btrfs/disk-io.c=3180=int btrfs_check_features(struct btrfs_fs_info *fs_info, bool is_rw_mount)
--
fs/btrfs/disk-io.c-3264-
fs/btrfs/disk-io.c:3265: /* This can be called by remount, we need to protect the super block. */
fs/btrfs/disk-io.c-3266- spin_lock(&fs_info->super_lock);
--
fs/btrfs/fs.h=112=enum {
fs/btrfs/fs.h-113- /*
fs/btrfs/fs.h:114: * Filesystem is being remounted, allow to skip some operations, like
fs/btrfs/fs.h-115- * defrag
--
fs/btrfs/fs.h=487=struct btrfs_fs_info {
--
fs/btrfs/fs.h-579- * wrong number because we will write out the data into a regular
fs/btrfs/fs.h:580: * extent. The write side(mount/remount) is under ->s_umount lock,
fs/btrfs/fs.h-581- * so it is also safe.
--
fs/btrfs/fs.h=1152=static inline bool btrfs_fs_closing_done(const struct btrfs_fs_info *fs_info)
--
fs/btrfs/fs.h-1160-/*
fs/btrfs/fs.h:1161: * If we remount the fs to be R/O or umount the fs, the cleaner needn't do
fs/btrfs/fs.h-1162- * anything except sleeping. This function is used to check the status of
fs/btrfs/fs.h-1163- * the fs.
fs/btrfs/fs.h:1164: * We check for BTRFS_FS_STATE_RO to avoid races with a concurrent remount,
fs/btrfs/fs.h-1165- * since setting and checking for SB_RDONLY in the superblock's flags is not
--
fs/btrfs/super.c=340=static int btrfs_parse_param(struct fs_context *fc, struct fs_parameter *param)
--
fs/btrfs/super.c-457- /*
fs/btrfs/super.c:458: * VFS limits the ability to toggle ACL on and off via remount,
fs/btrfs/super.c-459- * despite every file system allowing this. This seems to be
fs/btrfs/super.c-460- * an oversight since we all do, but it'll fail if we're
fs/btrfs/super.c:461: * remounting. So don't set the mask here, we'll check it in
fs/btrfs/super.c-462- * btrfs_reconfigure and do the toggling ourselves.
--
fs/btrfs/super.c-663- * Some options only have meaning at mount time and shouldn't persist across
fs/btrfs/super.c:664: * remounts, or be displayed. Clear these at the end of mount and remount code
fs/btrfs/super.c-665- * paths.
--
fs/btrfs/super.c=686=bool btrfs_check_options(const struct btrfs_fs_info *info,
--
fs/btrfs/super.c-727- * the mount options with the on-disk settings. Before the new mount API took
fs/btrfs/super.c:728: * effect we would do this on mount and remount. With the new mount API we'll
fs/btrfs/super.c-729- * only do this on the initial mount.
--
fs/btrfs/super.c-732- * file system to set the current mount options. If you mounted with special
fs/btrfs/super.c:733: * options to disable these features and then remounted we wouldn't revert the
fs/btrfs/super.c-734- * settings, because mounting without these features cleared the on-disk
--
fs/btrfs/super.c=1236=static void btrfs_resize_thread_pool(struct btrfs_fs_info *fs_info,
--
fs/btrfs/super.c-1256-
fs/btrfs/super.c:1257:static inline void btrfs_remount_begin(struct btrfs_fs_info *fs_info,
fs/btrfs/super.c-1258- unsigned long long old_opts, int flags)
--
fs/btrfs/super.c-1270-
fs/btrfs/super.c:1271:static inline void btrfs_remount_cleanup(struct btrfs_fs_info *fs_info,
fs/btrfs/super.c-1272- unsigned long long old_opts)
--
fs/btrfs/super.c-1297-
fs/btrfs/super.c:1298:static int btrfs_remount_rw(struct btrfs_fs_info *fs_info)
fs/btrfs/super.c-1299-{
--
fs/btrfs/super.c-1303- btrfs_err(fs_info,
fs/btrfs/super.c:1304: "remounting read-write after error is not allowed");
fs/btrfs/super.c-1305- return -EINVAL;
]
] |
| 125/2 | 2026/03/16 15:59 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 115857 output: 19 thoughts: 23 Thoughts: Okay, here's my take on those thoughts, tailored for an expert audience: **Investigating a Potential JFS Remount Issue** Alright, time to dig into this. My first instinct is to zero in on the `jfs_remount` function and see what's happening under the hood. Given the context, the logical place to start is the `fs/jfs/super.c` file. That's where I'd expect to find the superblock operations, and likely, the remount logic related to the JFS filesystem. I need to understand how `remount` is handled at the file system level. What are the potential pitfalls? Are there any obvious race conditions or assumptions that could be causing a problem? It's a classic situation – a function related to a critical operation, so I need to inspect it thoroughly. |
| 126/2 | 2026/03/16 15:59 | tool | grepper |
Error:no matches Args: map[Expression:jfs_remount] Results: map[Output:] |
| 127/2 | 2026/03/16 15:59 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 116091 output: 16 thoughts: 20 Thoughts: Okay, here's my interpretation of that thought process, assuming I'm an expert in file systems and kernel internals: **Diving into JFS Superblock Reconfiguration** Right, let's get down to brass tacks. I need to understand how the JFS superblock handles reconfiguration events. My immediate instinct is to check the core superblock implementation, so I'm thinking specifically about the `fs/jfs/super.c` file. The keyword that leaps to mind is `reconfigure` – that's the likely search term to trigger the code responsible for these sorts of operations. I'm visualizing the code structure now, anticipating function names and possible data structures related to superblock modification, versioning, or perhaps even online resizing if JFS supports it. I want to see how these changes are handled at the most fundamental level, before any higher-level abstractions get involved. The "reconfigure" search should give me a good entry point. |
| 128/2 | 2026/03/16 16:00 | tool | grepper |
Args:map[Expression:reconfigure] Results: map[Output:
Full output is too long, showing 500 out of 5493 lines.
Use more precise expression if possible.
[Documentation/ABI/testing/debugfs-driver-dcc=20=Description:
--
Documentation/ABI/testing/debugfs-driver-dcc-24- stored in the driver gets removed and users need to
Documentation/ABI/testing/debugfs-driver-dcc:25: reconfigure addresses again.
Documentation/ABI/testing/debugfs-driver-dcc-26-
--
Documentation/ABI/testing/sysfs-bus-cdx=90=Description:
Documentation/ABI/testing/sysfs-bus-cdx-91- Writing y/1/on to this file removes the corresponding
Documentation/ABI/testing/sysfs-bus-cdx:92: device from the CDX bus. If the device is to be reconfigured
Documentation/ABI/testing/sysfs-bus-cdx:93: reconfigured in the Hardware, the device can be removed, so
Documentation/ABI/testing/sysfs-bus-cdx-94- that the device driver does not access the device while it is
Documentation/ABI/testing/sysfs-bus-cdx:95: being reconfigured.
Documentation/ABI/testing/sysfs-bus-cdx-96-
--
Documentation/admin-guide/blockdev/ramdisk.rst=24=so that the VM subsystem does not try to reclaim them later.
Documentation/admin-guide/blockdev/ramdisk.rst-25-
Documentation/admin-guide/blockdev/ramdisk.rst:26:The RAM disk supports up to 16 RAM disks by default, and can be reconfigured
Documentation/admin-guide/blockdev/ramdisk.rst-27-to support an unlimited number of RAM disks (at your own risk). Just change
--
Documentation/admin-guide/device-mapper/vdo-design.rst=60=the number of zones for each structure, and hence the number of threads,
Documentation/admin-guide/device-mapper/vdo-design.rst:61:can be reconfigured each time a vdo target is started.
Documentation/admin-guide/device-mapper/vdo-design.rst-62-
--
Documentation/admin-guide/gpio/gpio-aggregator.rst=144=case when you create it using sysfs ``new_device`` interface.
--
Documentation/admin-guide/gpio/gpio-aggregator.rst-151- lines, you must use the "delete_device" interface to tear down the
Documentation/admin-guide/gpio/gpio-aggregator.rst:152: existing device and reconfigure it from scratch. However, you can still
Documentation/admin-guide/gpio/gpio-aggregator.rst-153- toggle the aggregator with the ``live`` attribute and adjust the
--
Documentation/admin-guide/hw-vuln/attack_vector_controls.rst=11=When new relevant CPU vulnerabilities are found, they will be added to these
Documentation/admin-guide/hw-vuln/attack_vector_controls.rst:12:attack vector controls so administrators will likely not need to reconfigure
Documentation/admin-guide/hw-vuln/attack_vector_controls.rst-13-their command line parameters as mitigations will continue to be correctly
--
Documentation/admin-guide/md.rst=191=All md devices contain:
--
Documentation/admin-guide/md.rst-206- Some raid levels allow this value to be set while the array is
Documentation/admin-guide/md.rst:207: active. This will reconfigure the array. Otherwise it can only
Documentation/admin-guide/md.rst-208- be set while assembling an array.
--
Documentation/admin-guide/md.rst-327- suspended (not supported yet)
Documentation/admin-guide/md.rst:328: All IO requests will block. The array can be reconfigured.
Documentation/admin-guide/md.rst-329-
--
Documentation/admin-guide/pm/intel_idle.rst=68=Each ``MWAIT`` hint value is interpreted by the processor as a license to
Documentation/admin-guide/pm/intel_idle.rst:69:reconfigure itself in a certain way in order to save energy. The processor
Documentation/admin-guide/pm/intel_idle.rst-70-configurations (with reduced power draw) resulting from that are referred to
--
Documentation/arch/powerpc/eeh-pci-error-recovery.rst=182=hoping to give the user-space scripts enough time to complete.
Documentation/arch/powerpc/eeh-pci-error-recovery.rst:183:It then resets the PCI card, reconfigures the device BAR's, and
Documentation/arch/powerpc/eeh-pci-error-recovery.rst-184-any bridges underneath. It then calls rpaphp_enable_pci_slot(),
--
Documentation/arch/s390/vfio-ap.rst=960=This will remove all of the matrix mdev device's sysfs structures including
Documentation/arch/s390/vfio-ap.rst:961:the mdev device itself. To recreate and reconfigure the matrix mdev device,
Documentation/arch/s390/vfio-ap.rst-962-all of the steps starting with step 3 will have to be performed again. Note
--
Documentation/arch/s390/vfio-ap.rst=966=remove it if no guest will use it during the remaining lifetime of the linux
Documentation/arch/s390/vfio-ap.rst:967:host. If the vfio_ap mdev is removed, one may want to also reconfigure
Documentation/arch/s390/vfio-ap.rst-968-the pool of adapters and queues reserved for use by the default drivers.
--
Documentation/devicetree/bindings/cache/l2c2x0.yaml=12=description: |+
--
Documentation/devicetree/bindings/cache/l2c2x0.yaml-22- cache controllers as found in e.g. Cortex-A15/A7/A57/A53. These
Documentation/devicetree/bindings/cache/l2c2x0.yaml:23: integrated L2 controllers are assumed to be all preconfigured by
Documentation/devicetree/bindings/cache/l2c2x0.yaml-24- early secure boot code. Thus no need to deal with their configuration
--
Documentation/devicetree/bindings/fpga/fpga-region.yaml=12=description: |
--
Documentation/devicetree/bindings/fpga/fpga-region.yaml-164- shutting down bridges that are upstream from the other active regions while one
Documentation/devicetree/bindings/fpga/fpga-region.yaml:165: region is getting reconfigured (see Figure 1 above). During PR, the FPGA's
Documentation/devicetree/bindings/fpga/fpga-region.yaml-166- hardware bridges remain enabled. The PR regions' bridges will be FPGA bridges
--
Documentation/devicetree/bindings/fpga/intel,stratix10-soc-fpga-mgr.yaml=14=description:
--
Documentation/devicetree/bindings/fpga/intel,stratix10-soc-fpga-mgr.yaml-16- processor system (HPS) and a Secure Device Manager (SDM). The Stratix10
Documentation/devicetree/bindings/fpga/intel,stratix10-soc-fpga-mgr.yaml:17: SoC FPGA Manager driver is used to configure/reconfigure the FPGA fabric
Documentation/devicetree/bindings/fpga/intel,stratix10-soc-fpga-mgr.yaml-18- on the die.The driver communicates with SDM/ATF via the stratix10-svc
--
Documentation/devicetree/bindings/mmc/ti-omap-hsmmc.txt=102=suspend.
Documentation/devicetree/bindings/mmc/ti-omap-hsmmc.txt-103-
Documentation/devicetree/bindings/mmc/ti-omap-hsmmc.txt:104:The workaround is to reconfigure the dat1 line as a GPIO upon suspend. To make
Documentation/devicetree/bindings/mmc/ti-omap-hsmmc.txt-105-this work, we need to set the named pinctrl states "default" and "idle".
--
Documentation/devicetree/bindings/net/ethernet-controller.yaml=268=additionalProperties: true
--
Documentation/devicetree/bindings/net/ethernet-controller.yaml-318-# and pass the value to the PHY. The PHY will then implement delays as
Documentation/devicetree/bindings/net/ethernet-controller.yaml:319:# specified by the 'phy-mode'. The PHY should always be reconfigured
Documentation/devicetree/bindings/net/ethernet-controller.yaml-320-# to implement the needed delays, replacing any setting performed by
--
Documentation/devicetree/bindings/net/fsl,enetc-ierb.yaml=9=description:
Documentation/devicetree/bindings/net/fsl,enetc-ierb.yaml-10- The fsl_enetc driver can probe on the Integrated Endpoint Register Block,
Documentation/devicetree/bindings/net/fsl,enetc-ierb.yaml:11: which preconfigures the FIFO limits for the ENETC ports.
Documentation/devicetree/bindings/net/fsl,enetc-ierb.yaml-12-
--
Documentation/devicetree/bindings/net/qca,qca7000.yaml=12=description: |
--
Documentation/devicetree/bindings/net/qca,qca7000.yaml-24- In order to use the QCA7000 as UART slave it must be defined as a child of a
Documentation/devicetree/bindings/net/qca,qca7000.yaml:25: UART master in the device tree. It is possible to preconfigure the UART
Documentation/devicetree/bindings/net/qca,qca7000.yaml-26- settings of the QCA7000 firmware, but it's not possible to change them during
--
Documentation/devicetree/bindings/pci/host-generic-pci.yaml=36=properties:
--
Documentation/devicetree/bindings/pci/host-generic-pci.yaml-71- perform any low level setup of clocks, PHYs or device registers, nor
Documentation/devicetree/bindings/pci/host-generic-pci.yaml:72: is there any reason for the driver to reconfigure ATU windows for
Documentation/devicetree/bindings/pci/host-generic-pci.yaml-73- config and/or IO space accesses at runtime.
--
Documentation/devicetree/bindings/pinctrl/pinctrl-bindings.txt=14=single static pin configuration, e.g. set up during initialization. Others
Documentation/devicetree/bindings/pinctrl/pinctrl-bindings.txt:15:need to reconfigure pins at run-time, for example to tri-state pins when the
Documentation/devicetree/bindings/pinctrl/pinctrl-bindings.txt-16-device is inactive. Hence, each client device can define a set of named
--
Documentation/devicetree/bindings/power/reset/gpio-poweroff.yaml=24=properties:
--
Documentation/devicetree/bindings/power/reset/gpio-poweroff.yaml-33- description: >
Documentation/devicetree/bindings/power/reset/gpio-poweroff.yaml:34: Initially configure the GPIO line as an input. Only reconfigure
Documentation/devicetree/bindings/power/reset/gpio-poweroff.yaml-35- it to an output when the power-off sequence is initiated. If this optional
--
Documentation/driver-api/crypto/iaa/iaa-crypto.rst=455=If the IAA devices and workqueues haven't previously been disabled and
Documentation/driver-api/crypto/iaa/iaa-crypto.rst:456:reconfigured, then the default configuration should be in place and no
Documentation/driver-api/crypto/iaa/iaa-crypto.rst-457-further IAA configuration is necessary. See :ref:`iaa_default_config`
--
Documentation/driver-api/pm/devices.rst=635=The main difference from ``resume[_early|_noirq]`` is that
Documentation/driver-api/pm/devices.rst-636-``restore[_early|_noirq]`` must assume the device has been accessed and
Documentation/driver-api/pm/devices.rst:637:reconfigured by the boot loader or the restore kernel. Consequently, the state
Documentation/driver-api/pm/devices.rst-638-of the device may be different from the state remembered from the ``freeze``,
--
Documentation/driver-api/serial/driver.rst=56=The port_sem semaphore is used to protect against ports being added/
Documentation/driver-api/serial/driver.rst:57:removed or reconfigured at inappropriate times. Since v2.6.27, this
Documentation/driver-api/serial/driver.rst-58-semaphore has been the 'mutex' member of the tty_port struct, and
--
Documentation/driver-api/thermal/intel_dptf.rst=135=ABI.
--
Documentation/driver-api/thermal/intel_dptf.rst-168- When set to 1, the power floor of the system in the current
Documentation/driver-api/thermal/intel_dptf.rst:169: configuration has been reached. It needs to be reconfigured to allow
Documentation/driver-api/thermal/intel_dptf.rst-170- power to be reduced any further.
--
Documentation/driver-api/usb/typec_bus.rst=51=If the communication with the partner using the SVID specific commands results
Documentation/driver-api/usb/typec_bus.rst:52:in need to reconfigure the pins on the connector, the alternate mode driver
Documentation/driver-api/usb/typec_bus.rst-53-needs to notify the bus using :c:func:`typec_altmode_notify()`. The driver
--
Documentation/fb/ep93xx-fb.rst=131=option when loading the driver::
--
Documentation/fb/ep93xx-fb.rst-134-
Documentation/fb/ep93xx-fb.rst:135:In some cases it may be possible to reconfigure your SDRAM layout to
Documentation/fb/ep93xx-fb.rst-136-avoid this bug. See section 13 of the EP93xx users' guide for details.
--
Documentation/filesystems/mount_api.rst=86=The fs_context fields are as follows:
--
Documentation/filesystems/mount_api.rst-100- A pointer to the file_system_type of the filesystem that is being
Documentation/filesystems/mount_api.rst:101: constructed or reconfigured. This retains a reference on the type owner.
Documentation/filesystems/mount_api.rst-102-
--
Documentation/filesystems/mount_api.rst=209=The filesystem context points to a table of operations::
--
Documentation/filesystems/mount_api.rst-217- int (*get_tree)(struct fs_context *fc);
Documentation/filesystems/mount_api.rst:218: int (*reconfigure)(struct fs_context *fc);
Documentation/filesystems/mount_api.rst-219- };
--
Documentation/filesystems/mount_api.rst=222=manage the filesystem context. They are as follows:
--
Documentation/filesystems/mount_api.rst-291-
Documentation/filesystems/mount_api.rst:292: int (*reconfigure)(struct fs_context *fc);
Documentation/filesystems/mount_api.rst-293-
--
Documentation/filesystems/mount_api.rst=308=number of operations used by the new mount code for this purpose:
--
Documentation/filesystems/mount_api.rst-320- reconfiguration (FS_CONTEXT_FOR_RECONFIGURE) in which case it indicates
Documentation/filesystems/mount_api.rst:321: the root dentry of the superblock to be reconfigured. It will also be
Documentation/filesystems/mount_api.rst-322- non-NULL in the case of a submount (FS_CONTEXT_FOR_SUBMOUNT) in which case
--
Documentation/filesystems/mount_api.rst-363- as a whole and to do any necessary allocation so that
Documentation/filesystems/mount_api.rst:364: security_sb_get_tree() and security_sb_reconfigure() are less likely to
Documentation/filesystems/mount_api.rst-365- fail. It should return 0 or a negative error code.
--
Documentation/filesystems/mount_api.rst-379-
Documentation/filesystems/mount_api.rst:380: void security_sb_reconfigure(struct fs_context *fc);
Documentation/filesystems/mount_api.rst-381-
--
Documentation/filesystems/mount_api.rst=401=destroying a context:
--
Documentation/filesystems/mount_api.rst-417-
Documentation/filesystems/mount_api.rst:418: struct fs_context *fs_context_for_reconfigure(
Documentation/filesystems/mount_api.rst-419- struct dentry *dentry,
--
Documentation/filesystems/tmpfs.rst=19=tmpfs extends ramfs with a few userspace configurable options listed and
Documentation/filesystems/tmpfs.rst:20:explained further below, some of which can be reconfigured dynamically on the
Documentation/filesystems/tmpfs.rst-21-fly using a remount ('mount -o remount ...') of the filesystem. A tmpfs
--
Documentation/fpga/dfl.rst=360=Partial Reconfiguration
Documentation/fpga/dfl.rst-361-=======================
Documentation/fpga/dfl.rst:362:As mentioned above, accelerators can be reconfigured through partial
Documentation/fpga/dfl.rst-363-reconfiguration of a PR bitstream file. The PR bitstream file must have been
--
Documentation/gpu/amdgpu/display/dcn-overview.rst=58=Always approach the DCN architecture as something flexible that can be
Documentation/gpu/amdgpu/display/dcn-overview.rst:59:configured and reconfigured in multiple ways; in other words, each block can be
Documentation/gpu/amdgpu/display/dcn-overview.rst-60-setup or ignored accordingly with userspace demands. For example, if we
--
Documentation/gpu/vkms.rst=234=Runtime Configuration
--
Documentation/gpu/vkms.rst-236-
Documentation/gpu/vkms.rst:237:We want to be able to reconfigure vkms instance without having to reload the
Documentation/gpu/vkms.rst-238-module through configfs. Use/Test-cases:
--
Documentation/networking/bonding.rst=2863=slaves and remains persistent (even if the first slave is removed) until
Documentation/networking/bonding.rst:2864:the bonding device is brought down or reconfigured.
Documentation/networking/bonding.rst-2865-
--
Documentation/networking/device_drivers/ethernet/cirrus/cs89x0.rst=458=another device on the ISA bus. In this case, you should use the Manual Setup
Documentation/networking/device_drivers/ethernet/cirrus/cs89x0.rst:459:option to reconfigure the adapter by selecting a different value for the system
Documentation/networking/device_drivers/ethernet/cirrus/cs89x0.rst-460-resource that failed.
--
Documentation/networking/device_drivers/ethernet/cirrus/cs89x0.rst=535=diagnostic self-test. Normally, the ISA resource in conflict will fail the
Documentation/networking/device_drivers/ethernet/cirrus/cs89x0.rst:536:self-test. If so, reconfigure the adapter selecting another choice for the
Documentation/networking/device_drivers/ethernet/cirrus/cs89x0.rst-537-resource in conflict. Run the diagnostics again to check for further IO
--
Documentation/networking/device_drivers/ethernet/cirrus/cs89x0.rst=540=In some cases, such as when the PC will not boot, it may be necessary to remove
Documentation/networking/device_drivers/ethernet/cirrus/cs89x0.rst:541:the adapter and reconfigure it by installing it in another PC to run the
Documentation/networking/device_drivers/ethernet/cirrus/cs89x0.rst-542-CS8900/20 Setup Utility. Once reinstalled in the target system, run the
--
Documentation/networking/dsa/sja1105.rst=28=table takes a number of entries. Some configuration tables can be (partially)
Documentation/networking/dsa/sja1105.rst:29:reconfigured at runtime, some not. Some tables are mandatory, some not:
Documentation/networking/dsa/sja1105.rst-30-
--
Documentation/networking/netconsole.rst=105=remote logging targets to be dynamically added, removed, or have their
Documentation/networking/netconsole.rst:106:parameters reconfigured at runtime from a configfs-based userspace interface.
Documentation/networking/netconsole.rst-107-
--
Documentation/networking/netdev-features.rst=87=care must be taken as the change won't affect already configured VLANs.
--
Documentation/networking/netdev-features.rst-90-
Documentation/networking/netdev-features.rst:91:Hardware should be reconfigured to match passed feature set. The set
Documentation/networking/netdev-features.rst-92-should not be altered unless some error condition happens that can't
--
Documentation/networking/switchdev.rst=505=Because VLAN filtering can be turned on/off at runtime, the switchdev driver
Documentation/networking/switchdev.rst:506:must be able to reconfigure the underlying hardware on the fly to honor the
Documentation/networking/switchdev.rst-507-toggling of that option and behave appropriately. If that is not possible, the
--
Documentation/networking/switchdev.rst=557=Because IGMP snooping can be turned on/off at runtime, the switchdev driver
Documentation/networking/switchdev.rst:558:must be able to reconfigure the underlying hardware on the fly to honor the
Documentation/networking/switchdev.rst-559-toggling of that option and behave appropriately.
--
Documentation/process/debugging/kgdb.rst=324=ways to activate this feature.
--
Documentation/process/debugging/kgdb.rst-337- setting will not take effect until the next point the I/O is
Documentation/process/debugging/kgdb.rst:338: reconfigured.
Documentation/process/debugging/kgdb.rst-339-
--
Documentation/process/debugging/kgdb.rst=512=unregister all the kernel hook points.
Documentation/process/debugging/kgdb.rst-513-
Documentation/process/debugging/kgdb.rst:514:All kgdb I/O drivers can be reconfigured at run time, if
Documentation/process/debugging/kgdb.rst-515-``CONFIG_SYSFS`` and ``CONFIG_MODULES`` are enabled, by echo'ing a new
--
Documentation/sound/cards/audiophile-usb.rst=112=by the applications: for instance if the user is recording in a
Documentation/sound/cards/audiophile-usb.rst-113-24bit-depth-mode and immediately after wants to switch to a 16bit-depth mode,
Documentation/sound/cards/audiophile-usb.rst:114:the snd-usb-audio module will reconfigure the device on the fly.
Documentation/sound/cards/audiophile-usb.rst-115-
--
Documentation/timers/no_hz.rst=223=Known Issues
--
Documentation/timers/no_hz.rst-276-
Documentation/timers/no_hz.rst:277:* A reboot is required to reconfigure both adaptive idle and RCU
Documentation/timers/no_hz.rst-278- callback offloading. Runtime reconfiguration could be provided
--
Documentation/userspace-api/gpio/gpio-get-lineinfo-watch-ioctl.rst=40=information. Changes to line info include a line being requested, released
Documentation/userspace-api/gpio/gpio-get-lineinfo-watch-ioctl.rst:41:or reconfigured.
Documentation/userspace-api/gpio/gpio-get-lineinfo-watch-ioctl.rst-42-
--
Documentation/userspace-api/gpio/gpio-v2-get-lineinfo-watch-ioctl.rst=36=information. Changes to line info include a line being requested, released
Documentation/userspace-api/gpio/gpio-v2-get-lineinfo-watch-ioctl.rst:37:or reconfigured.
Documentation/userspace-api/gpio/gpio-v2-get-lineinfo-watch-ioctl.rst-38-
--
Documentation/userspace-api/gpio/sysfs.rst=73=and have the following read/write attributes:
--
Documentation/userspace-api/gpio/sysfs.rst-86- it was exported by kernel code that didn't explicitly
Documentation/userspace-api/gpio/sysfs.rst:87: allow userspace to reconfigure this GPIO's direction.
Documentation/userspace-api/gpio/sysfs.rst-88-
--
Documentation/userspace-api/media/drivers/omap3isp-uapi.rst=140=The statistics subdevs do offer more dynamic configuration options than the
Documentation/userspace-api/media/drivers/omap3isp-uapi.rst:141:other subdevs. They can be enabled, disable and reconfigured when the pipeline
Documentation/userspace-api/media/drivers/omap3isp-uapi.rst-142-is in streaming state.
--
Documentation/userspace-api/media/drivers/omap3isp-uapi.rst=176=internally allocated if the module is enabled. It will return -EBUSY on this
Documentation/userspace-api/media/drivers/omap3isp-uapi.rst:177:case. In order to avoid such condition, either disable/reconfigure/enable the
Documentation/userspace-api/media/drivers/omap3isp-uapi.rst-178-module or request the necessary buffer size during the first configuration
--
Documentation/userspace-api/media/mediactl/request-api.rst=10=modern devices (stateless codecs, complex camera pipelines, ...) and APIs
Documentation/userspace-api/media/mediactl/request-api.rst-11-(Android Codec v2). One such requirement is the ability for devices belonging to
Documentation/userspace-api/media/mediactl/request-api.rst:12:the same pipeline to reconfigure and collaborate closely on a per-frame basis.
Documentation/userspace-api/media/mediactl/request-api.rst-13-Another is support of stateless codecs, which require controls to be applied
--
Documentation/userspace-api/media/mediactl/request-api.rst=20=it is, it is terribly inefficient: user-space would have to flush all activity
Documentation/userspace-api/media/mediactl/request-api.rst:21:on the media pipeline, reconfigure it for the next frame, queue the buffers to
Documentation/userspace-api/media/mediactl/request-api.rst-22-be processed with that configuration, and wait until they are all available for
--
Documentation/userspace-api/media/v4l/dev-decoder.rst=255=Initialization
--
Documentation/userspace-api/media/v4l/dev-decoder.rst-394- those parsed by the decoder, a `Dynamic Resolution Change` will be
Documentation/userspace-api/media/v4l/dev-decoder.rst:395: triggered to reconfigure them.
Documentation/userspace-api/media/v4l/dev-decoder.rst-396-
--
Documentation/userspace-api/media/v4l/vidioc-g-parm.rst=38=Applications can request a different frame interval. The capture or
Documentation/userspace-api/media/v4l/vidioc-g-parm.rst:39:output device will be reconfigured to support the requested frame
Documentation/userspace-api/media/v4l/vidioc-g-parm.rst-40-interval if possible. Optionally drivers may choose to skip or
--
Documentation/userspace-api/media/v4l/vidioc-subdev-g-routing.rst=50=All stream configurations are reset when ``VIDIOC_SUBDEV_S_ROUTING`` is called.
Documentation/userspace-api/media/v4l/vidioc-subdev-g-routing.rst:51:This means that the userspace must reconfigure all stream formats and selections
Documentation/userspace-api/media/v4l/vidioc-subdev-g-routing.rst-52-after calling the ioctl with e.g. ``VIDIOC_SUBDEV_S_FMT``.
--
Documentation/virt/uml/user_mode_linux_howto_v2.rst=474=As the tunnel is fixed these are not negotiated and they are
Documentation/virt/uml/user_mode_linux_howto_v2.rst:475:preconfigured on both ends.
Documentation/virt/uml/user_mode_linux_howto_v2.rst-476-
--
Documentation/watchdog/watchdog-kernel-api.rst=334=governor framework (kbuild CONFIG_WATCHDOG_PRETIMEOUT_GOV symbol) is enabled,
Documentation/watchdog/watchdog-kernel-api.rst:335:an action is taken by a preconfigured pretimeout governor preassigned to
Documentation/watchdog/watchdog-kernel-api.rst-336-the watchdog device. If watchdog pretimeout governor framework is not
--
arch/alpha/kernel/setup.c=420=setup_arch(char **cmdline_p)
--
arch/alpha/kernel/setup.c-537- /*
arch/alpha/kernel/setup.c:538: * Identify and reconfigure for the current system.
arch/alpha/kernel/setup.c-539- */
--
arch/arc/mm/cache.c=933=static noinline void __init arc_ioc_setup(void)
--
arch/arc/mm/cache.c-938- * If IOC was already enabled (due to bootloader) it technically needs to
arch/arc/mm/cache.c:939: * be reconfigured with aperture base,size corresponding to Linux memory map
arch/arc/mm/cache.c-940- * which will certainly be different than uboot's. But disabling and
--
arch/arm/boot/dts/intel/ixp/intel-ixp42x-gateworks-gw2348.dts=82= ide@1,0 {
--
arch/arm/boot/dts/intel/ixp/intel-ixp42x-gateworks-gw2348.dts-85- * Set up expansion bus config to a really slow timing.
arch/arm/boot/dts/intel/ixp/intel-ixp42x-gateworks-gw2348.dts:86: * The CF driver will dynamically reconfigure these timings
arch/arm/boot/dts/intel/ixp/intel-ixp42x-gateworks-gw2348.dts-87- * depending on selected PIO mode (0-4).
--
arch/arm/boot/dts/intel/ixp/intel-ixp43x-gateworks-gw2358.dts=98= ide@3,0 {
--
arch/arm/boot/dts/intel/ixp/intel-ixp43x-gateworks-gw2358.dts-101- * Set up expansion bus config to a really slow timing.
arch/arm/boot/dts/intel/ixp/intel-ixp43x-gateworks-gw2358.dts:102: * The CF driver will dynamically reconfigure these timings
arch/arm/boot/dts/intel/ixp/intel-ixp43x-gateworks-gw2358.dts-103- * depending on selected PIO mode (0-4).
--
arch/arm/mach-omap2/omap_hwmod.h=376=struct omap_hwmod_omap4_prcm {
--
arch/arm/mach-omap2/omap_hwmod.h-432- * so the wakeups are still functional (Only known case for now is UART)
arch/arm/mach-omap2/omap_hwmod.h:433: * HWMOD_RECONFIG_IO_CHAIN: omap_hwmod code needs to reconfigure wake-up
arch/arm/mach-omap2/omap_hwmod.h:434: * events by calling _reconfigure_io_chain() when a device is enabled
arch/arm/mach-omap2/omap_hwmod.h-435- * or idled.
--
arch/arm/mach-omap2/prcm-common.h=468=struct omap_prcm_irq {
--
arch/arm/mach-omap2/prcm-common.h-486- * @restore_irqen: fn ptr to save and clear IRQENABLE regs
arch/arm/mach-omap2/prcm-common.h:487: * @reconfigure_io_chain: fn ptr to reconfigure IO chain
arch/arm/mach-omap2/prcm-common.h-488- * @saved_mask: IRQENABLE regs are saved here during suspend
--
arch/arm/mach-omap2/prcm-common.h=498=struct omap_prcm_irq_setup {
--
arch/arm/mach-omap2/prcm-common.h-509- void (*restore_irqen)(u32 *saved_mask);
arch/arm/mach-omap2/prcm-common.h:510: void (*reconfigure_io_chain)(void);
arch/arm/mach-omap2/prcm-common.h-511- u32 *saved_mask;
--
arch/arm/mach-omap2/prm3xxx.c=42=static struct omap_prcm_irq_setup omap3_prcm_irq_setup = {
--
arch/arm/mach-omap2/prm3xxx.c-52- .restore_irqen = &omap3xxx_prm_restore_irqen,
arch/arm/mach-omap2/prm3xxx.c:53: .reconfigure_io_chain = NULL,
arch/arm/mach-omap2/prm3xxx.c-54-};
--
arch/arm/mach-omap2/prm3xxx.c=288=void __init omap3_prm_init_pm(bool has_uart4, bool has_iva)
--
arch/arm/mach-omap2/prm3xxx.c-379-/**
arch/arm/mach-omap2/prm3xxx.c:380: * omap3430_pre_es3_1_reconfigure_io_chain - restart wake-up daisy chain
arch/arm/mach-omap2/prm3xxx.c-381- *
--
arch/arm/mach-omap2/prm3xxx.c-384- */
arch/arm/mach-omap2/prm3xxx.c:385:static void omap3430_pre_es3_1_reconfigure_io_chain(void)
arch/arm/mach-omap2/prm3xxx.c-386-{
--
arch/arm/mach-omap2/prm3xxx.c-394-/**
arch/arm/mach-omap2/prm3xxx.c:395: * omap3_prm_reconfigure_io_chain - clear latches and reconfigure I/O chain
arch/arm/mach-omap2/prm3xxx.c-396- *
--
arch/arm/mach-omap2/prm3xxx.c-402- */
arch/arm/mach-omap2/prm3xxx.c:403:static void omap3_prm_reconfigure_io_chain(void)
arch/arm/mach-omap2/prm3xxx.c-404-{
--
arch/arm/mach-omap2/prm3xxx.c-430- * wakeups to occur, WAKEUPENABLE bits must be set in the pad mux
arch/arm/mach-omap2/prm3xxx.c:431: * registers, and omap3xxx_prm_reconfigure_io_chain() must be called.
arch/arm/mach-omap2/prm3xxx.c-432- * No return value.
--
arch/arm/mach-omap2/prm3xxx.c=689=static int omap3xxx_prm_late_init(void)
--
arch/arm/mach-omap2/prm3xxx.c-697- if (omap3_has_io_chain_ctrl())
arch/arm/mach-omap2/prm3xxx.c:698: omap3_prcm_irq_setup.reconfigure_io_chain =
arch/arm/mach-omap2/prm3xxx.c:699: omap3_prm_reconfigure_io_chain;
arch/arm/mach-omap2/prm3xxx.c-700- else
arch/arm/mach-omap2/prm3xxx.c:701: omap3_prcm_irq_setup.reconfigure_io_chain =
arch/arm/mach-omap2/prm3xxx.c:702: omap3430_pre_es3_1_reconfigure_io_chain;
arch/arm/mach-omap2/prm3xxx.c-703-
--
arch/arm/mach-omap2/prm44xx.c=38=static void omap44xx_prm_restore_irqen(u32 *saved_mask);
arch/arm/mach-omap2/prm44xx.c:39:static void omap44xx_prm_reconfigure_io_chain(void);
arch/arm/mach-omap2/prm44xx.c-40-
--
arch/arm/mach-omap2/prm44xx.c=45=static struct omap_prcm_irq_setup omap4_prcm_irq_setup = {
--
arch/arm/mach-omap2/prm44xx.c-55- .restore_irqen = &omap44xx_prm_restore_irqen,
arch/arm/mach-omap2/prm44xx.c:56: .reconfigure_io_chain = &omap44xx_prm_reconfigure_io_chain,
arch/arm/mach-omap2/prm44xx.c-57-};
--
arch/arm/mach-omap2/prm44xx.c=286=static void omap44xx_prm_restore_irqen(u32 *saved_mask)
--
arch/arm/mach-omap2/prm44xx.c-296-/**
arch/arm/mach-omap2/prm44xx.c:297: * omap44xx_prm_reconfigure_io_chain - clear latches and reconfigure I/O chain
arch/arm/mach-omap2/prm44xx.c-298- *
--
arch/arm/mach-omap2/prm44xx.c-304- */
arch/arm/mach-omap2/prm44xx.c:305:static void omap44xx_prm_reconfigure_io_chain(void)
arch/arm/mach-omap2/prm44xx.c-306-{
--
arch/arm/mach-omap2/prm44xx.c-348- * to occur, WAKEUPENABLE bits must be set in the pad mux registers, and
arch/arm/mach-omap2/prm44xx.c:349: * omap44xx_prm_reconfigure_io_chain() must be called. No return value.
arch/arm/mach-omap2/prm44xx.c-350- */
--
arch/arm/mach-omap2/prm_common.c=265=int omap_prcm_register_chain_handler(struct omap_prcm_irq_setup *irq_setup)
--
arch/arm/mach-omap2/prm_common.c-343- irq = omap_prcm_event_to_irq("io");
arch/arm/mach-omap2/prm_common.c:344: omap_pcs_legacy_init(irq, irq_setup->reconfigure_io_chain);
arch/arm/mach-omap2/prm_common.c-345-
--
arch/arm/mach-omap2/sleep34xx.S=509=l2_inv_api_params:
--
arch/arm/mach-omap2/sleep34xx.S-517- * This function implements the erratum ID i443 WA, applies to 34xx >= ES3.0
arch/arm/mach-omap2/sleep34xx.S:518: * Copied to and run from SRAM in order to reconfigure the SDRC parameters.
arch/arm/mach-omap2/sleep34xx.S-519- */
--
arch/arm64/boot/dts/freescale/imx8mm-tqma8mqml.dtsi=248=&usdhc3 {
--
arch/arm64/boot/dts/freescale/imx8mm-tqma8mqml.dtsi-263- * Attention:
arch/arm64/boot/dts/freescale/imx8mm-tqma8mqml.dtsi:264: * wdog reset is routed to PMIC, PMIC must be preconfigured to force POR
arch/arm64/boot/dts/freescale/imx8mm-tqma8mqml.dtsi-265- * without LDO for SNVS. GPIO1_IO02 must not be used as GPIO.
--
arch/arm64/boot/dts/freescale/imx8mn-tqma8mqnl.dtsi=239=&usdhc3 {
--
arch/arm64/boot/dts/freescale/imx8mn-tqma8mqnl.dtsi-254- * Attention:
arch/arm64/boot/dts/freescale/imx8mn-tqma8mqnl.dtsi:255: * wdog reset is routed to PMIC, PMIC must be preconfigured to force POR
arch/arm64/boot/dts/freescale/imx8mn-tqma8mqnl.dtsi-256- * without LDO for SNVS. GPIO1_IO02 must not be used as GPIO.
--
arch/arm64/boot/dts/freescale/imx8mq-hummingboard-pulse.dts=163= MX8MQ_IOMUXC_SAI2_RXD0_GPIO4_IO23 0x41
--
arch/arm64/boot/dts/freescale/imx8mq-hummingboard-pulse.dts-165- * The following 2 pins need to be commented out and
arch/arm64/boot/dts/freescale/imx8mq-hummingboard-pulse.dts:166: * reconfigured to enable RTS/CTS on UART3
arch/arm64/boot/dts/freescale/imx8mq-hummingboard-pulse.dts-167- */
--
arch/arm64/kvm/pmu-emul.c=1311=void kvm_pmu_nested_transition(struct kvm_vcpu *vcpu)
--
arch/arm64/kvm/pmu-emul.c-1321- /*
arch/arm64/kvm/pmu-emul.c:1322: * We only need to reconfigure events where the filter is
arch/arm64/kvm/pmu-emul.c-1323- * different at EL1 vs. EL2, as we're multiplexing the true EL1
--
arch/mips/bcm63xx/prom.c=21=void __init prom_init(void)
--
arch/mips/bcm63xx/prom.c-87- * Here we will start up CPU1 in the background and ask it to
arch/mips/bcm63xx/prom.c:88: * reconfigure itself then go back to sleep.
arch/mips/bcm63xx/prom.c-89- */
--
arch/mips/bmips/setup.c=82=static void bcm63xx_fixup_cpu1(void)
--
arch/mips/bmips/setup.c-90- * Here we will start up CPU1 in the background and ask it to
arch/mips/bmips/setup.c:91: * reconfigure itself then go back to sleep.
arch/mips/bmips/setup.c-92- */
--
arch/mips/include/asm/mips-cm.h=455=static inline unsigned int mips_cm_vp_id(unsigned int cpu)
--
arch/mips/include/asm/mips-cm.h-479- * mips_cm_unlock_other() calls cannot be pre-empted by anything which may
arch/mips/include/asm/mips-cm.h:480: * reconfigure the redirect/other region, and cannot be interfered with by
arch/mips/include/asm/mips-cm.h-481- * another VP in the core. As such calls to this function should not be nested.
--
arch/powerpc/kernel/pci-hotplug.c=137=void pci_hp_add_devices(struct pci_bus *bus)
--
arch/powerpc/kernel/pci-hotplug.c-174-
arch/powerpc/kernel/pci-hotplug.c:175: /* Scan bridges that need to be reconfigured */
arch/powerpc/kernel/pci-hotplug.c-176- for_each_pci_bridge(dev, bus)
--
arch/powerpc/kernel/watchdog.c=589=void watchdog_hardlockup_set_timeout_pct(u64 pct)
--
arch/powerpc/kernel/watchdog.c-592- WRITE_ONCE(wd_timeout_pct, pct);
arch/powerpc/kernel/watchdog.c:593: lockup_detector_reconfigure();
arch/powerpc/kernel/watchdog.c-594-}
--
arch/powerpc/platforms/512x/mpc512x_shared.c=472=void __init mpc512x_setup_arch(void)
--
arch/powerpc/platforms/512x/mpc512x_shared.c-483- * Perform chip select configuration for devices on LocalPlus Bus.
arch/powerpc/platforms/512x/mpc512x_shared.c:484: * Intended to dynamically reconfigure the chip select parameters
arch/powerpc/platforms/512x/mpc512x_shared.c-485- * for configurable devices on the bus.
--
arch/powerpc/platforms/powernv/eeh-powernv.c=1131=static int pnv_eeh_get_log(struct eeh_pe *pe, int severity,
--
arch/powerpc/platforms/powernv/eeh-powernv.c-1143- *
arch/powerpc/platforms/powernv/eeh-powernv.c:1144: * The function will be called to reconfigure the bridges included
arch/powerpc/platforms/powernv/eeh-powernv.c-1145- * in the specified PE so that the mulfunctional PE would be recovered
--
arch/powerpc/platforms/powernv/pci-ioda.c=991=static void pnv_pci_ioda_dma_dev_setup(struct pci_dev *pdev)
--
arch/powerpc/platforms/powernv/pci-ioda.c-1056- *
]
] |
| 129/2 | 2026/03/16 16:00 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 128599 output: 19 thoughts: 23 Thoughts: Okay, here's my summary, presented as a stream of thought: **Diving into the JFS Reconfiguration Logic** Alright, I need to understand how JFS reconfiguration works. My gut feeling is the key functionality will be in the superblock handling, so I'm going to jump straight into the source. Let's see... `jfs_reconfigure`... that's the function I'm after. Now, where would that be defined within the JFS codebase? Right, I'll bet it's in `fs/jfs`. And focusing on superblock-related operations, the logical place to start looking is `super.c`. Okay, let's fire up the editor and search for that function name within that file. I'm expecting some pretty complex interaction here, dealing with live changes to filesystem parameters and potential data migration. Should be interesting to see how they've handled it. Let's dig in and see what the code reveals! |
| 130/2 | 2026/03/16 16:00 | tool | grepper |
Args:map[Expression:jfs_reconfigure] Results: map[Output:fs/jfs/super.c=250=static int jfs_parse_param(struct fs_context *fc, struct fs_parameter *param)
--
fs/jfs/super.c-360-
fs/jfs/super.c:361:static int jfs_reconfigure(struct fs_context *fc)
fs/jfs/super.c-362-{
--
fs/jfs/super.c=904=static const struct fs_context_operations jfs_context_ops = {
--
fs/jfs/super.c-906- .get_tree = jfs_get_tree,
fs/jfs/super.c:907: .reconfigure = jfs_reconfigure,
fs/jfs/super.c-908- .free = jfs_free_fc,
] |
| 131/2 | 2026/03/16 16:00 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 129067 output: 42 thoughts: 11 Thoughts: Okay, here's my interpretation of that thought process, assuming I'm an expert in the field and providing a fulsome summary: **Initial Assessment: Deep Dive into Reconfiguration** Alright, the mental gears are already turning. The mention of `jfs_reconfigure` immediately triggers a mental checklist. My primary focus is on how system resources, particularly for the JFS file system, are managed. Specifically, any thought about reconfiguration suggests a modification in some parameters. I'm already anticipating questions: Am I dealing with storage capacity, performance tuning, or perhaps addressing a specific error condition related to the JFS implementation? What is the trigger and the desired outcome? Depending on the context, this could involve anything from adjusting journal sizes and block allocation strategies, to altering mount options, or even modifying underlying hardware configurations. I am ready to identify the source and cause of the reconfiguration, and anticipate any potential effects on performance and stability. My task is now to ensure that the reconfiguration aligns with established best practices and risk mitigation strategies to avoid disrupting the system or causing data corruption. |
| 132/2 | 2026/03/16 16:01 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/super.c IncludeLines:true Name:jfs_reconfigure] Results: map[SourceCode: 361: static int jfs_reconfigure(struct fs_context *fc)
362: {
363: struct jfs_context *ctx = fc->fs_private;
364: struct super_block *sb = fc->root->d_sb;
365: int readonly = fc->sb_flags & SB_RDONLY;
366: int rc = 0;
367: int flag = ctx->flag;
368: int ret;
369:
370: sync_filesystem(sb);
371:
372: /* Transfer results of parsing to the sbi */
373: JFS_SBI(sb)->flag = ctx->flag;
374: JFS_SBI(sb)->uid = ctx->uid;
375: JFS_SBI(sb)->gid = ctx->gid;
376: JFS_SBI(sb)->umask = ctx->umask;
377: JFS_SBI(sb)->minblks_trim = ctx->minblks_trim;
378: if (ctx->nls_map != (void *) -1) {
379: unload_nls(JFS_SBI(sb)->nls_tab);
380: JFS_SBI(sb)->nls_tab = ctx->nls_map;
381: }
382: ctx->nls_map = NULL;
383:
384: if (ctx->resize) {
385: if (sb_rdonly(sb)) {
386: pr_err("JFS: resize requires volume to be mounted read-write\n");
387: return -EROFS;
388: }
389:
390: if (!ctx->newLVSize) {
391: ctx->newLVSize = sb_bdev_nr_blocks(sb);
392: if (ctx->newLVSize == 0)
393: pr_err("JFS: Cannot determine volume size\n");
394: }
395:
396: rc = jfs_extendfs(sb, ctx->newLVSize, 0);
397: if (rc)
398: return rc;
399: }
400:
401: if (sb_rdonly(sb) && !readonly) {
402: /*
403: * Invalidate any previously read metadata. fsck may have
404: * changed the on-disk data since we mounted r/o
405: */
406: truncate_inode_pages(JFS_SBI(sb)->direct_inode->i_mapping, 0);
407:
408: JFS_SBI(sb)->flag = flag;
409: ret = jfs_mount_rw(sb, 1);
410:
411: /* mark the fs r/w for quota activity */
412: sb->s_flags &= ~SB_RDONLY;
413:
414: dquot_resume(sb, -1);
415: return ret;
416: }
417: if (!sb_rdonly(sb) && readonly) {
418: rc = dquot_suspend(sb, -1);
419: if (rc < 0)
420: return rc;
421: rc = jfs_umount_rw(sb);
422: JFS_SBI(sb)->flag = flag;
423: return rc;
424: }
425: if ((JFS_SBI(sb)->flag & JFS_NOINTEGRITY) != (flag & JFS_NOINTEGRITY)) {
426: if (!sb_rdonly(sb)) {
427: rc = jfs_umount_rw(sb);
428: if (rc)
429: return rc;
430:
431: JFS_SBI(sb)->flag = flag;
432: ret = jfs_mount_rw(sb, 1);
433: return ret;
434: }
435: }
436: JFS_SBI(sb)->flag = flag;
437:
438: return 0;
439: }
SourceFile:fs/jfs/super.c] |
| 133/2 | 2026/03/16 16:01 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 130571 output: 48 thoughts: 176 Thoughts: **Analyzing the JFS Reconfiguration Logic** Okay, so I'm digging into `jfs_reconfigure`. I see this check: `sb_rdonly(sb) && !readonly`. This means we're remounting the filesystem from read-only to read-write, right? And, *that's* when it calls `truncate_inode_pages` on the direct inode. Hmm. Why are we truncating pages during a read-only to read-write transition? That doesn't immediately make sense to me. If we were truly in read-only mode, there shouldn't *be* any dirty metapages to truncate! The filesystem should have been clean. Unless... wait, is this related to a different state? Perhaps we are remounting from read-write to read-only? If that is the case, it calls `jfs_umount_rw(sb)`... right, let's take a look at that function's implementation. |
| 134/2 | 2026/03/16 16:01 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_umount.c IncludeLines:true Name:jfs_umount_rw] Results: map[SourceCode: 121: int jfs_umount_rw(struct super_block *sb)
122: {
123: struct jfs_sb_info *sbi = JFS_SBI(sb);
124: struct jfs_log *log = sbi->log;
125:
126: if (!log)
127: return 0;
128:
129: /*
130: * close log:
131: *
132: * remove file system from log active file system list.
133: */
134: jfs_flush_journal(log, 2);
135:
136: /*
137: * Make sure all metadata makes it to disk
138: */
139: dbSync(sbi->ipbmap);
140: diSync(sbi->ipimap);
141:
142: /*
143: * Note that we have to do this even if sync_blockdev() will
144: * do exactly the same a few instructions later: We can't
145: * mark the superblock clean before everything is flushed to
146: * disk.
147: */
148: filemap_write_and_wait(sbi->direct_inode->i_mapping);
149:
150: updateSuper(sb, FM_CLEAN);
151:
152: return lmLogClose(sb);
153: }
SourceFile:fs/jfs/jfs_umount.c] |
| 135/2 | 2026/03/16 16:01 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 131470 output: 46 thoughts: 330 Thoughts: **Analyzing JFS Reconfiguration Logic**
Okay, let's break this down. My initial thought is about how `jfs_reconfigure` handles read-write to read-only transitions. It calls `jfs_umount_rw(sb)`, which then flushes the journal via `jfs_flush_journal(log, 2)`. That makes sense for data integrity. Then, later, when remounting back to read-write, I see `truncate_inode_pages(JFS_SBI(sb)->direct_inode->i_mapping, 0)` is called. Clearing the cache makes sense here, too.
But... my focus sharpens when considering the `JFS_NOINTEGRITY` flag. The code snippet says:
```c
if ((JFS_SBI(sb)->flag & JFS_NOINTEGRITY) != (flag & JFS_NOINTEGRITY)) {
if (!sb_rdonly(sb)) {
rc = jfs_umount_rw(sb);
if (rc)
return rc;
JFS_SBI(sb)->flag = flag;
ret = jfs_mount_rw(sb, 1);
return ret;
}
}
```
It looks like if `JFS_NOINTEGRITY` *changes* (regardless of read-only/read-write status), it *also* calls `jfs_umount_rw(sb)` followed by `jfs_mount_rw(sb, 1)`. I have to confirm this is indeed what is happening.
The potential connection is immediately apparent, *does* `jfs_mount_rw` have the potential to call `truncate_inode_pages`? I should examine the implementation of `jfs_mount_rw`. I need to understand whether this flag change might also trigger that cache truncation. It's a key question for understanding the implications of changing integrity settings. Time to dive deeper.
|
| 136/2 | 2026/03/16 16:01 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_mount.c IncludeLines:true Name:jfs_mount_rw] Results: map[SourceCode: 215: /*
216: * NAME: jfs_mount_rw(sb, remount)
217: *
218: * FUNCTION: Completes read-write mount, or remounts read-only volume
219: * as read-write
220: */
221: int jfs_mount_rw(struct super_block *sb, int remount)
222: {
223: struct jfs_sb_info *sbi = JFS_SBI(sb);
224: int rc;
225:
226: /*
227: * If we are re-mounting a previously read-only volume, we want to
228: * re-read the inode and block maps, since fsck.jfs may have updated
229: * them.
230: */
231: if (remount) {
232: if (chkSuper(sb) || (sbi->state != FM_CLEAN))
233: return -EINVAL;
234:
235: truncate_inode_pages(sbi->ipimap->i_mapping, 0);
236: truncate_inode_pages(sbi->ipbmap->i_mapping, 0);
237:
238: IWRITE_LOCK(sbi->ipimap, RDWRLOCK_IMAP);
239: diUnmount(sbi->ipimap, 1);
240: if ((rc = diMount(sbi->ipimap))) {
241: IWRITE_UNLOCK(sbi->ipimap);
242: jfs_err("jfs_mount_rw: diMount failed!");
243: return rc;
244: }
245: IWRITE_UNLOCK(sbi->ipimap);
246:
247: dbUnmount(sbi->ipbmap, 1);
248: if ((rc = dbMount(sbi->ipbmap))) {
249: jfs_err("jfs_mount_rw: dbMount failed!");
250: return rc;
251: }
252: }
253:
254: /*
255: * open/initialize log
256: */
257: if ((rc = lmLogOpen(sb)))
258: return rc;
259:
260: /*
261: * update file system superblock;
262: */
263: if ((rc = updateSuper(sb, FM_MOUNT))) {
264: jfs_err("jfs_mount: updateSuper failed w/rc = %d", rc);
265: lmLogClose(sb);
266: return rc;
267: }
268:
269: /*
270: * write MOUNT log record of the file system
271: */
272: logMOUNT(sb);
273:
274: return rc;
275: }
SourceFile:fs/jfs/jfs_mount.c] |
| 137/2 | 2026/03/16 16:01 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 133172 output: 44 thoughts: 298 Thoughts: **Analyzing Potential Metadata Inconsistency During Remount in JFS** Okay, so I'm looking at `jfs_mount_rw` and the remount logic. Specifically, these calls to `truncate_inode_pages` on `sbi->ipimap` and `sbi->ipbmap`. Right, I know those are special inodes, the I-node map and the block map respectively. Truncating their folios here... Hmm, that seems odd. We just called `jfs_umount_rw` which should have flushed the journal beforehand. So, ideally, there shouldn't be any dirty metapages associated with those inodes. But, I need to consider failure scenarios. What if `jfs_umount_rw` failed? The `jfs_reconfigure` function checks the return code (`if (rc) return rc;`), so a failed unmount should be caught and the remount aborted. Okay, that seems safe enough. But what if `jfs_umount_rw` *succeeded*, but `jfs_flush_journal` still left some dirty metapages lingering, maybe in the `tlock` list? Okay, `jfs_flush_journal` does wait for the journal's commit queue (`log->cqueue`) and the synchronization list (`log->synclist`) to be empty. That part is good, so it *should* be fully flushed. But I'm still not 100% comfortable. Let's dig deeper into the journal synchronization process. I need to make sure metapages are *completely* clear. Specifically, I should re-examine `txUnlock`. I recall that it's the one responsible for releasing transaction resources, right? I need to confirm the order of operations there. Did it remove entries from the `log->synclist` *before* actually calling `put_metapage(mp)`? That would be a problem. |
| 138/2 | 2026/03/16 16:02 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_txnmgr.c IncludeLines:true Name:txUnlock] Results: map[SourceCode: 895: /*
896: * NAME: txUnlock()
897: *
898: * FUNCTION: Initiates pageout of pages modified by tid in journalled
899: * objects and frees their lockwords.
900: */
901: static void txUnlock(struct tblock * tblk)
902: {
903: struct tlock *tlck;
904: struct linelock *linelock;
905: lid_t lid, next, llid, k;
906: struct metapage *mp;
907: struct jfs_log *log;
908: int difft, diffp;
909: unsigned long flags;
910:
911: jfs_info("txUnlock: tblk = 0x%p", tblk);
912: log = JFS_SBI(tblk->sb)->log;
913:
914: /*
915: * mark page under tlock homeok (its log has been written):
916: */
917: for (lid = tblk->next; lid; lid = next) {
918: tlck = lid_to_tlock(lid);
919: next = tlck->next;
920:
921: jfs_info("unlocking lid = %d, tlck = 0x%p", lid, tlck);
922:
923: /* unbind page from tlock */
924: if ((mp = tlck->mp) != NULL &&
925: (tlck->type & tlckBTROOT) == 0) {
926: assert(mp->xflag & COMMIT_PAGE);
927:
928: /* hold buffer
929: */
930: hold_metapage(mp);
931:
932: assert(mp->nohomeok > 0);
933: _metapage_homeok(mp);
934:
935: /* inherit younger/larger clsn */
936: LOGSYNC_LOCK(log, flags);
937: if (mp->clsn) {
938: logdiff(difft, tblk->clsn, log);
939: logdiff(diffp, mp->clsn, log);
940: if (difft > diffp)
941: mp->clsn = tblk->clsn;
942: } else
943: mp->clsn = tblk->clsn;
944: LOGSYNC_UNLOCK(log, flags);
945:
946: assert(!(tlck->flag & tlckFREEPAGE));
947:
948: put_metapage(mp);
949: }
950:
951: /* insert tlock, and linelock(s) of the tlock if any,
952: * at head of freelist
953: */
954: TXN_LOCK();
955:
956: llid = ((struct linelock *) & tlck->lock)->next;
957: while (llid) {
958: linelock = (struct linelock *) lid_to_tlock(llid);
959: k = linelock->next;
960: txLockFree(llid);
961: llid = k;
962: }
963: txLockFree(lid);
964:
965: TXN_UNLOCK();
966: }
967: tblk->next = tblk->last = 0;
968:
969: /*
970: * remove tblock from logsynclist
971: * (allocation map pages inherited lsn of tblk and
972: * has been inserted in logsync list at txUpdateMap())
973: */
974: if (tblk->lsn) {
975: LOGSYNC_LOCK(log, flags);
976: log->count--;
977: list_del(&tblk->synclist);
978: LOGSYNC_UNLOCK(log, flags);
979: }
980: }
SourceFile:fs/jfs/jfs_txnmgr.c] |
| 139/2 | 2026/03/16 16:02 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 135170 output: 44 thoughts: 449 Thoughts: **Analyzing the Metapage Flushing and Potential Crash in JFS** Okay, let's break this down. In `txUnlock`, I see `put_metapage(mp)` is called at line 948, and then `tblk` gets removed from `log->synclist` at line 977. So, that `put_metapage` call *precedes* the removal from the synclist. This is key. That means when `jfs_flush_journal` runs, it's going to be waiting for `log->synclist` to be empty, effectively waiting for `txUnlock` to complete, right? Consequently, `txUnlock` will have *finished* when `jfs_flush_journal` returns, meaning `release_metapage` has already happened, and thus `metapage_write_one` has also already run. So, that rules out a crash within `jfs_reconfigure` or `jfs_put_super`, doesn't it? Hold on. The crash trace shows `jfs_lazycommit` is running. If `jfs_lazycommit` is active, that means `txUnlock` is also running. But what if the folio is truncated by *something else* while this is going on? My mind immediately jumps to file deletion. When a file is deleted, `jfs_evict_inode` is called. That calls `truncate_inode_pages_final` on the inode's mapping. And if that file is a *directory*, its folios get truncated! Now, the question is, do directory folios have metapages? Yes, they do. `dtSearch` uses `DT_GETPAGE`, which calls `read_metapage` with an absolute value of 0. That's using `inode->i_mapping`, meaning directory folios *do* have metapages! So, when a directory is deleted, `jfs_evict_inode` is triggered, and we potentially have an issue with metapages being truncated. This requires further investigation! |
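The "inherit younger/larger clsn" block that txUnlock() uses (lines 936-944 of the quoted source) relies on `logdiff`, a wraparound-aware distance from the log's sync point, so two log sequence numbers remain comparable even after the circular log wraps. A minimal user-space sketch of that idea follows; the real macro lives in fs/jfs/jfs_logmgr.h, and the struct and field names here are simplified stand-ins, not the kernel definitions:

```c
#include <assert.h>

/* Illustrative version of JFS's logdiff(): how far an lsn sits past the
 * current sync point within a circular log of logsize entries. */
struct log_sketch {
	int syncpt;   /* current synchronization point */
	int logsize;  /* total size of the circular log */
};

static int logdiff_sketch(int lsn, const struct log_sketch *log)
{
	int diff = lsn - log->syncpt;
	if (diff < 0)
		diff += log->logsize; /* lsn wrapped past the log's end */
	return diff;
}
```

Under this measure, keeping the clsn with the larger distance from the sync point picks the younger record even when one of the two values has wrapped.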
| 140/2 | 2026/03/16 16:02 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/inode.c IncludeLines:true Name:jfs_evict_inode] Results: map[SourceCode: 145: void jfs_evict_inode(struct inode *inode)
146: {
147: struct jfs_inode_info *ji = JFS_IP(inode);
148:
149: jfs_info("In jfs_evict_inode, inode = 0x%p", inode);
150:
151: if (!inode->i_nlink && !is_bad_inode(inode)) {
152: dquot_initialize(inode);
153:
154: truncate_inode_pages_final(&inode->i_data);
155: if (JFS_IP(inode)->fileset == FILESYSTEM_I) {
156: struct inode *ipimap = JFS_SBI(inode->i_sb)->ipimap;
157:
158: if (test_cflag(COMMIT_Freewmap, inode))
159: jfs_free_zero_link(inode);
160:
161: if (ipimap && JFS_IP(ipimap)->i_imap)
162: diFree(inode);
163:
164: /*
165: * Free the inode from the quota allocation.
166: */
167: dquot_free_inode(inode);
168: }
169: } else {
170: truncate_inode_pages_final(&inode->i_data);
171: }
172: clear_inode(inode);
173: dquot_drop(inode);
174:
175: BUG_ON(!list_empty(&ji->anon_inode_list));
176:
177: spin_lock_irq(&ji->ag_lock);
178: if (ji->active_ag != -1) {
179: struct bmap *bmap = JFS_SBI(inode->i_sb)->bmap;
180: atomic_dec(&bmap->db_active[ji->active_ag]);
181: ji->active_ag = -1;
182: }
183: spin_unlock_irq(&ji->ag_lock);
184: }
SourceFile:fs/jfs/inode.c] |
| 141/2 | 2026/03/16 16:02 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 136758 output: 44 thoughts: 485 Thoughts: **JFS Metapage Write Issue During File Deletion** Okay, so I've been tracing through the JFS code, and I think I've found a critical issue related to metapage handling during file deletion, and I need to document this for posterity. It all starts with `jfs_evict_inode`, specifically when `truncate_inode_pages_final(&inode->i_data)` is invoked. Now, `inode->i_data` is just a pointer to `inode->i_mapping`, which is the address space of the file. `truncate_inode_pages_final` is designed to clear all folios associated with the inode from the page cache. Fine, makes sense, a file is being deleted or truncated. Here's the problem: if there are dirty metapages associated with those folios, `metapage_invalidate_folio` gets called. This in turn triggers `metapage_release_folio`. Critically, if `metapage_release_folio` *fails* to free the metapage (due to a non-zero count, `nohomeok` set, or the `META_dirty` bit being set), the folio is still removed from the page cache. And this is where it falls apart, because the folio's `mapping` field becomes `NULL`. Later, `jfs_lazycommit` processes the transaction that, for example, contained the directory modification that deleted the file. It eventually calls `txUnlock`, which then calls `put_metapage(mp)`. `put_metapage` in turn calls `release_metapage(mp)`. Now, `release_metapage` checks if `test_bit(META_dirty, &mp->flag)` is set and if it *is* set, which can happen after the page is removed from the cache, it calls `metapage_write_one(folio)`. Here's the disaster: `metapage_write_one` calls `metapage_write_folio(folio, &wbc)`. And here's the kernel panic. `metapage_write_folio` dereferences `folio->mapping->host`. But, because `truncate_inode_pages_final` removed the folio from the page cache, `folio->mapping` is now `NULL`! Boom.
The real head-scratcher is why `release_metapage` even *tries* to write a dirty metapage in a file that's being deleted! The filesystem blocks were freed at file deletion, the metapages should be discarded, right? `release_metapage` *doesn't* check the file's deletion status. It only checks that `META_dirty` bit. So it proceeds to write the metapage in the case described. Shouldn't the metapages be marked as `META_discard` if the file is deleted? I need to go back and check `dtDeleteUp` or `dtDelete` because those are the places where the file deletion actually happens. I bet there is an issue there. |
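The fix described in the patch title at the top of this report guards `release_metapage()` against exactly this truncated-folio state. A minimal user-space C sketch of that guard logic follows; the structs, fields, and helper here are simplified stand-ins for the real definitions in fs/jfs/jfs_metapage.c, not kernel code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel structures involved. */
struct folio {
	void *mapping;            /* becomes NULL once the folio is truncated */
};
struct metapage {
	unsigned long flag;       /* META_dirty / META_sync bits */
	int lsn;                  /* nonzero => metapage is on the logsync list */
	struct folio *folio;
};

enum { META_dirty = 1 << 0, META_sync = 1 << 1 };

static bool wrote_back;           /* records whether writeback ran */

static void metapage_write_one(struct folio *folio)
{
	/* The real function ends up dereferencing folio->mapping->host,
	 * which is the NULL dereference in the crash report. */
	assert(folio->mapping != NULL);
	wrote_back = true;
}

/* Guarded release path: if the folio was truncated (mapping == NULL),
 * discard the dirty state instead of attempting writeback. */
static void release_metapage_guarded(struct metapage *mp)
{
	if (mp->folio->mapping == NULL) {
		mp->flag &= ~(META_dirty | META_sync);
		mp->lsn = 0;      /* drop it from the logsync list */
		return;
	}
	if (mp->flag & META_dirty)
		metapage_write_one(mp->folio);
}
```

Clearing META_dirty in the truncated branch matters because, per the commit message, drop_metapage() returns early while the dirty bit is still set, which would otherwise leak the metapage.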
| 142/2 | 2026/03/16 16:02 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_dtree.c IncludeLines:true Name:dtDeleteUp] Results: map[SourceCode:2223: /*
2224: * dtDeleteUp()
2225: *
2226: * function:
2227: * free empty pages as propagating deletion up the tree
2228: *
2229: * parameter:
2230: *
2231: * return:
2232: */
2233: static int dtDeleteUp(tid_t tid, struct inode *ip,
2234: struct metapage * fmp, dtpage_t * fp, struct btstack * btstack)
2235: {
2236: int rc = 0;
2237: struct metapage *mp;
2238: dtpage_t *p;
2239: int index, nextindex;
2240: int xlen;
2241: struct btframe *parent;
2242: struct dt_lock *dtlck;
2243: struct tlock *tlck;
2244: struct lv *lv;
2245: struct pxd_lock *pxdlock;
2246: int i;
2247:
2248: /*
2249: * keep the root leaf page which has become empty
2250: */
2251: if (BT_IS_ROOT(fmp)) {
2252: /*
2253: * reset the root
2254: *
2255: * dtInitRoot() acquires txlock on the root
2256: */
2257: dtInitRoot(tid, ip, PARENT(ip));
2258:
2259: DT_PUTPAGE(fmp);
2260:
2261: return 0;
2262: }
2263:
2264: /*
2265: * free the non-root leaf page
2266: */
2267: /*
2268: * acquire a transaction lock on the page
2269: *
2270: * write FREEXTENT|NOREDOPAGE log record
2271: * N.B. linelock is overlaid as freed extent descriptor, and
2272: * the buffer page is freed;
2273: */
2274: tlck = txMaplock(tid, ip, tlckDTREE | tlckFREE);
2275: pxdlock = (struct pxd_lock *) & tlck->lock;
2276: pxdlock->flag = mlckFREEPXD;
2277: pxdlock->pxd = fp->header.self;
2278: pxdlock->index = 1;
2279:
2280: /* update sibling pointers */
2281: if ((rc = dtRelink(tid, ip, fp))) {
2282: BT_PUTPAGE(fmp);
2283: return rc;
2284: }
2285:
2286: xlen = lengthPXD(&fp->header.self);
2287:
2288: /* Free quota allocation. */
2289: dquot_free_block(ip, xlen);
2290:
2291: /* free/invalidate its buffer page */
2292: discard_metapage(fmp);
2293:
2294: /*
2295: * propagate page deletion up the directory tree
2296: *
2297: * If the delete from the parent page makes it empty,
2298: * continue all the way up the tree.
2299: * stop if the root page is reached (which is never deleted) or
2300: * if the entry deletion does not empty the page.
2301: */
2302: while ((parent = BT_POP(btstack)) != NULL) {
2303: /* pin the parent page <sp> */
2304: DT_GETPAGE(ip, parent->bn, mp, PSIZE, p, rc);
2305: if (rc)
2306: return rc;
2307:
2308: /*
2309: * free the extent of the child page deleted
2310: */
2311: index = parent->index;
2312:
2313: /*
2314: * delete the entry for the child page from parent
2315: */
2316: nextindex = p->header.nextindex;
2317:
2318: /*
2319: * the parent has the single entry being deleted:
2320: *
2321: * free the parent page which has become empty.
2322: */
2323: if (nextindex == 1) {
2324: /*
2325: * keep the root internal page which has become empty
2326: */
2327: if (p->header.flag & BT_ROOT) {
2328: /*
2329: * reset the root
2330: *
2331: * dtInitRoot() acquires txlock on the root
2332: */
2333: dtInitRoot(tid, ip, PARENT(ip));
2334:
2335: DT_PUTPAGE(mp);
2336:
2337: return 0;
2338: }
2339: /*
2340: * free the parent page
2341: */
2342: else {
2343: /*
2344: * acquire a transaction lock on the page
2345: *
2346: * write FREEXTENT|NOREDOPAGE log record
2347: */
2348: tlck =
2349: txMaplock(tid, ip,
2350: tlckDTREE | tlckFREE);
2351: pxdlock = (struct pxd_lock *) & tlck->lock;
2352: pxdlock->flag = mlckFREEPXD;
2353: pxdlock->pxd = p->header.self;
2354: pxdlock->index = 1;
2355:
2356: /* update sibling pointers */
2357: if ((rc = dtRelink(tid, ip, p))) {
2358: DT_PUTPAGE(mp);
2359: return rc;
2360: }
2361:
2362: xlen = lengthPXD(&p->header.self);
2363:
2364: /* Free quota allocation */
2365: dquot_free_block(ip, xlen);
2366:
2367: /* free/invalidate its buffer page */
2368: discard_metapage(mp);
2369:
2370: /* propagate up */
2371: continue;
2372: }
2373: }
2374:
2375: /*
2376: * the parent has other entries remaining:
2377: *
2378: * delete the router entry from the parent page.
2379: */
2380: BT_MARK_DIRTY(mp, ip);
2381: /*
2382: * acquire a transaction lock on the page
2383: *
2384: * action: router entry deletion
2385: */
2386: tlck = txLock(tid, ip, mp, tlckDTREE | tlckENTRY);
2387: dtlck = (struct dt_lock *) & tlck->lock;
2388:
2389: /* linelock header */
2390: if (dtlck->index >= dtlck->maxcnt)
2391: dtlck = (struct dt_lock *) txLinelock(dtlck);
2392: lv = & dtlck->lv[dtlck->index];
2393: lv->offset = 0;
2394: lv->length = 1;
2395: dtlck->index++;
2396:
2397: /* linelock stbl of non-root leaf page */
2398: if (!(p->header.flag & BT_ROOT)) {
2399: if (dtlck->index < dtlck->maxcnt)
2400: lv++;
2401: else {
2402: dtlck = (struct dt_lock *) txLinelock(dtlck);
2403: lv = & dtlck->lv[0];
2404: }
2405: i = index >> L2DTSLOTSIZE;
2406: lv->offset = p->header.stblindex + i;
2407: lv->length =
2408: ((p->header.nextindex - 1) >> L2DTSLOTSIZE) -
2409: i + 1;
2410: dtlck->index++;
2411: }
2412:
2413: /* free the router entry */
2414: dtDeleteEntry(p, index, &dtlck);
2415:
2416: /* reset key of new leftmost entry of level (for consistency) */
2417: if (index == 0 &&
2418: ((p->header.flag & BT_ROOT) || p->header.prev == 0))
2419: dtTruncateEntry(p, 0, &dtlck);
2420:
2421: /* unpin the parent page */
2422: DT_PUTPAGE(mp);
2423:
2424: /* exit propagation up */
2425: break;
2426: }
2427:
2428: if (!DO_INDEX(ip))
2429: ip->i_size -= PSIZE;
2430:
2431: return 0;
2432: }
SourceFile:fs/jfs/jfs_dtree.c] |
| 143/2 | 2026/03/16 16:02 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 140724 output: 42 thoughts: 532 Thoughts: **Delving into JFS Directory Deletion**

Okay, so I'm digging into how JFS handles directory deletion, and I've hit a conceptual snag. Specifically, I'm trying to figure out the exact sequence of events and, crucially, when the directory's disk blocks are actually freed. The initial clue was `dtDeleteUp`, which calls `discard_metapage` when a page empties. But what about when the whole directory goes away, i.e. when the directory is deleted and its inode is eventually evicted? The question is: does JFS proactively free all the directory pages before it throws away the inode?

The flow seems to go through `jfs_rmdir`, which removes the directory entry from its parent and decrements the link count. Then it's off to `iput`, which eventually leads to `jfs_evict_inode`. But hold on, it looks like `jfs_rmdir` *doesn't* actually free the pages directly; the page freeing seems to happen when the inode is evicted.

So, back to `jfs_evict_inode`. Looking at that function again, I see `truncate_inode_pages_final` called when the link count is zero and the inode isn't bad. However, `truncate_inode_pages_final` just cleans the folios out of the page cache. That's *not* the same as freeing the actual disk blocks. So where do those blocks get released? The next candidate is `diFree(inode)`, which appears *after* the `truncate_inode_pages_final` call, but it's guarded: it only runs for inodes whose fileset is `FILESYSTEM_I`, and `jfs_free_zero_link` is called under that same `FILESYSTEM_I` branch. `jfs_free_zero_link` looks like the wrong path for this, though. So yes, the promising piece in `jfs_evict_inode` is the `FILESYSTEM_I` branch with its call to `diFree(inode)`. `diFree` looks like the likely candidate.
So the real question is: Does `diFree` handle the freeing of directory pages, or is there another component at play that's responsible for the freeing of the data/directory blocks? Need to dive into `diFree`. |
| 144/2 | 2026/03/16 16:02 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_imap.c IncludeLines:true Name:diFree] Results: map[SourceCode: 807: /*
808: * NAME: diFree(ip)
809: *
810: * FUNCTION: free a specified inode from the inode working map
811: * for a fileset or aggregate.
812: *
813: * if the inode to be freed represents the first (only)
814: * free inode within the iag, the iag will be placed on
815: * the ag free inode list.
816: *
817: * freeing the inode will cause the inode extent to be
818: * freed if the inode is the only allocated inode within
819: * the extent. in this case all the disk resource backing
820: * up the inode extent will be freed. in addition, the iag
821: * will be placed on the ag extent free list if the extent
822: * is the first free extent in the iag. if freeing the
823: * extent also means that no free inodes will exist for
824: * the iag, the iag will also be removed from the ag free
825: * inode list.
826: *
827: * the iag describing the inode will be freed if the extent
828: * is to be freed and it is the only backed extent within
829: * the iag. in this case, the iag will be removed from the
830: * ag free extent list and ag free inode list and placed on
831: * the inode map's free iag list.
832: *
833: * a careful update approach is used to provide consistency
834: * in the face of updates to multiple buffers. under this
835: * approach, all required buffers are obtained before making
836: * any updates and are held until all updates are complete.
837: *
838: * PARAMETERS:
839: * ip - inode to be freed.
840: *
841: * RETURN VALUES:
842: * 0 - success
843: * -EIO - i/o error.
844: */
845: int diFree(struct inode *ip)
846: {
847: int rc;
848: ino_t inum = ip->i_ino;
849: struct iag *iagp, *aiagp, *biagp, *ciagp, *diagp;
850: struct metapage *mp, *amp, *bmp, *cmp, *dmp;
851: int iagno, ino, extno, bitno, sword, agno;
852: int back, fwd;
853: u32 bitmap, mask;
854: struct inode *ipimap = JFS_SBI(ip->i_sb)->ipimap;
855: struct inomap *imap = JFS_IP(ipimap)->i_imap;
856: pxd_t freepxd;
857: tid_t tid;
858: struct inode *iplist[3];
859: struct tlock *tlck;
860: struct pxd_lock *pxdlock;
861:
862: /*
863: * This is just to suppress compiler warnings. The same logic that
864: * references these variables is used to initialize them.
865: */
866: aiagp = biagp = ciagp = diagp = NULL;
867:
868: /* get the iag number containing the inode.
869: */
870: iagno = INOTOIAG(inum);
871:
872: /* make sure that the iag is contained within
873: * the map.
874: */
875: if (iagno >= imap->im_nextiag) {
876: print_hex_dump(KERN_ERR, "imap: ", DUMP_PREFIX_ADDRESS, 16, 4,
877: imap, 32, 0);
878: jfs_error(ip->i_sb, "inum = %d, iagno = %d, nextiag = %d\n",
879: (uint) inum, iagno, imap->im_nextiag);
880: return -EIO;
881: }
882:
883: /* get the allocation group for this ino.
884: */
885: agno = BLKTOAG(JFS_IP(ip)->agstart, JFS_SBI(ip->i_sb));
886:
887: /* Lock the AG specific inode map information
888: */
889: AG_LOCK(imap, agno);
890:
891: /* Obtain read lock in imap inode. Don't release it until we have
892: * read all of the IAG's that we are going to.
893: */
894: IREAD_LOCK(ipimap, RDWRLOCK_IMAP);
895:
896: /* read the iag.
897: */
898: if ((rc = diIAGRead(imap, iagno, &mp))) {
899: IREAD_UNLOCK(ipimap);
900: AG_UNLOCK(imap, agno);
901: return (rc);
902: }
903: iagp = (struct iag *) mp->data;
904:
905: /* get the inode number and extent number of the inode within
906: * the iag and the inode number within the extent.
907: */
908: ino = inum & (INOSPERIAG - 1);
909: extno = ino >> L2INOSPEREXT;
910: bitno = ino & (INOSPEREXT - 1);
911: mask = HIGHORDER >> bitno;
912:
913: if (!(le32_to_cpu(iagp->wmap[extno]) & mask)) {
914: jfs_error(ip->i_sb, "wmap shows inode already free\n");
915: }
916:
917: if (!addressPXD(&iagp->inoext[extno])) {
918: release_metapage(mp);
919: IREAD_UNLOCK(ipimap);
920: AG_UNLOCK(imap, agno);
921: jfs_error(ip->i_sb, "invalid inoext\n");
922: return -EIO;
923: }
924:
925: /* compute the bitmap for the extent reflecting the freed inode.
926: */
927: bitmap = le32_to_cpu(iagp->wmap[extno]) & ~mask;
928:
929: if (imap->im_agctl[agno].numfree > imap->im_agctl[agno].numinos) {
930: release_metapage(mp);
931: IREAD_UNLOCK(ipimap);
932: AG_UNLOCK(imap, agno);
933: jfs_error(ip->i_sb, "numfree > numinos\n");
934: return -EIO;
935: }
936: /*
937: * inode extent still has some inodes or below low water mark:
938: * keep the inode extent;
939: */
940: if (bitmap ||
941: imap->im_agctl[agno].numfree < 96 ||
942: (imap->im_agctl[agno].numfree < 288 &&
943: (((imap->im_agctl[agno].numfree * 100) /
944: imap->im_agctl[agno].numinos) <= 25))) {
945: /* if the iag currently has no free inodes (i.e.,
946: * the inode being freed is the first free inode of iag),
947: * insert the iag at head of the inode free list for the ag.
948: */
949: if (iagp->nfreeinos == 0) {
950: /* check if there are any iags on the ag inode
951: * free list. if so, read the first one so that
952: * we can link the current iag onto the list at
953: * the head.
954: */
955: if ((fwd = imap->im_agctl[agno].inofree) >= 0) {
956: /* read the iag that currently is the head
957: * of the list.
958: */
 959: 				if ((rc = diIAGRead(imap, fwd, &amp))) {
960: IREAD_UNLOCK(ipimap);
961: AG_UNLOCK(imap, agno);
962: release_metapage(mp);
963: return (rc);
964: }
965: aiagp = (struct iag *) amp->data;
966:
967: /* make current head point back to the iag.
968: */
969: aiagp->inofreeback = cpu_to_le32(iagno);
970:
971: write_metapage(amp);
972: }
973:
974: /* iag points forward to current head and iag
975: * becomes the new head of the list.
976: */
977: iagp->inofreefwd =
978: cpu_to_le32(imap->im_agctl[agno].inofree);
979: iagp->inofreeback = cpu_to_le32(-1);
980: imap->im_agctl[agno].inofree = iagno;
981: }
982: IREAD_UNLOCK(ipimap);
983:
984: /* update the free inode summary map for the extent if
985: * freeing the inode means the extent will now have free
986: * inodes (i.e., the inode being freed is the first free
987: * inode of extent),
988: */
989: if (iagp->wmap[extno] == cpu_to_le32(ONES)) {
990: sword = extno >> L2EXTSPERSUM;
991: bitno = extno & (EXTSPERSUM - 1);
992: iagp->inosmap[sword] &=
993: cpu_to_le32(~(HIGHORDER >> bitno));
994: }
995:
996: /* update the bitmap.
997: */
998: iagp->wmap[extno] = cpu_to_le32(bitmap);
999:
1000: /* update the free inode counts at the iag, ag and
1001: * map level.
1002: */
1003: le32_add_cpu(&iagp->nfreeinos, 1);
1004: imap->im_agctl[agno].numfree += 1;
1005: atomic_inc(&imap->im_numfree);
1006:
1007: /* release the AG inode map lock
1008: */
1009: AG_UNLOCK(imap, agno);
1010:
1011: /* write the iag */
1012: write_metapage(mp);
1013:
1014: return (0);
1015: }
1016:
1017:
1018: /*
1019: * inode extent has become free and above low water mark:
1020: * free the inode extent;
1021: */
1022:
1023: /*
1024: * prepare to update iag list(s) (careful update step 1)
1025: */
1026: amp = bmp = cmp = dmp = NULL;
1027: fwd = back = -1;
1028:
1029: /* check if the iag currently has no free extents. if so,
1030: * it will be placed on the head of the ag extent free list.
1031: */
1032: if (iagp->nfreeexts == 0) {
1033: /* check if the ag extent free list has any iags.
1034: * if so, read the iag at the head of the list now.
1035: * this (head) iag will be updated later to reflect
1036: * the addition of the current iag at the head of
1037: * the list.
1038: */
1039: if ((fwd = imap->im_agctl[agno].extfree) >= 0) {
1040: 			if ((rc = diIAGRead(imap, fwd, &amp)))
1041: goto error_out;
1042: aiagp = (struct iag *) amp->data;
1043: }
1044: } else {
1045: /* iag has free extents. check if the addition of a free
1046: * extent will cause all extents to be free within this
1047: * iag. if so, the iag will be removed from the ag extent
1048: * free list and placed on the inode map's free iag list.
1049: */
1050: if (iagp->nfreeexts == cpu_to_le32(EXTSPERIAG - 1)) {
1051: /* in preparation for removing the iag from the
1052: * ag extent free list, read the iags preceding
1053: * and following the iag on the ag extent free
1054: * list.
1055: */
1056: if ((fwd = le32_to_cpu(iagp->extfreefwd)) >= 0) {
1057: 				if ((rc = diIAGRead(imap, fwd, &amp)))
1058: goto error_out;
1059: aiagp = (struct iag *) amp->data;
1060: }
1061:
1062: if ((back = le32_to_cpu(iagp->extfreeback)) >= 0) {
1063: if ((rc = diIAGRead(imap, back, &bmp)))
1064: goto error_out;
1065: biagp = (struct iag *) bmp->data;
1066: }
1067: }
1068: }
1069:
1070: /* remove the iag from the ag inode free list if freeing
1071: * this extent cause the iag to have no free inodes.
1072: */
1073: if (iagp->nfreeinos == cpu_to_le32(INOSPEREXT - 1)) {
1074: int inofreeback = le32_to_cpu(iagp->inofreeback);
1075: int inofreefwd = le32_to_cpu(iagp->inofreefwd);
1076:
1077: /* in preparation for removing the iag from the
1078: * ag inode free list, read the iags preceding
1079: * and following the iag on the ag inode free
1080: * list. before reading these iags, we must make
1081: * sure that we already don't have them in hand
1082: * from up above, since re-reading an iag (buffer)
1083: * we are currently holding would cause a deadlock.
1084: */
1085: if (inofreefwd >= 0) {
1086:
1087: if (inofreefwd == fwd)
1088: ciagp = (struct iag *) amp->data;
1089: else if (inofreefwd == back)
1090: ciagp = (struct iag *) bmp->data;
1091: else {
1092: if ((rc =
1093: diIAGRead(imap, inofreefwd, &cmp)))
1094: goto error_out;
1095: ciagp = (struct iag *) cmp->data;
1096: }
1097: assert(ciagp != NULL);
1098: }
1099:
1100: if (inofreeback >= 0) {
1101: if (inofreeback == fwd)
1102: diagp = (struct iag *) amp->data;
1103: else if (inofreeback == back)
1104: diagp = (struct iag *) bmp->data;
1105: else {
1106: if ((rc =
1107: diIAGRead(imap, inofreeback, &dmp)))
1108: goto error_out;
1109: diagp = (struct iag *) dmp->data;
1110: }
1111: assert(diagp != NULL);
1112: }
1113: }
1114:
1115: IREAD_UNLOCK(ipimap);
1116:
1117: /*
1118: * invalidate any page of the inode extent freed from buffer cache;
1119: */
1120: freepxd = iagp->inoext[extno];
1121: invalidate_pxd_metapages(ip, freepxd);
1122:
1123: /*
1124: * update iag list(s) (careful update step 2)
1125: */
1126: /* add the iag to the ag extent free list if this is the
1127: * first free extent for the iag.
1128: */
1129: if (iagp->nfreeexts == 0) {
1130: if (fwd >= 0)
1131: aiagp->extfreeback = cpu_to_le32(iagno);
1132:
1133: iagp->extfreefwd =
1134: cpu_to_le32(imap->im_agctl[agno].extfree);
1135: iagp->extfreeback = cpu_to_le32(-1);
1136: imap->im_agctl[agno].extfree = iagno;
1137: } else {
1138: /* remove the iag from the ag extent list if all extents
1139: * are now free and place it on the inode map iag free list.
1140: */
1141: if (iagp->nfreeexts == cpu_to_le32(EXTSPERIAG - 1)) {
1142: if (fwd >= 0)
1143: aiagp->extfreeback = iagp->extfreeback;
1144:
1145: if (back >= 0)
1146: biagp->extfreefwd = iagp->extfreefwd;
1147: else
1148: imap->im_agctl[agno].extfree =
1149: le32_to_cpu(iagp->extfreefwd);
1150:
1151: iagp->extfreefwd = iagp->extfreeback = cpu_to_le32(-1);
1152:
1153: IAGFREE_LOCK(imap);
1154: iagp->iagfree = cpu_to_le32(imap->im_freeiag);
1155: imap->im_freeiag = iagno;
1156: IAGFREE_UNLOCK(imap);
1157: }
1158: }
1159:
1160: /* remove the iag from the ag inode free list if freeing
1161: * this extent causes the iag to have no free inodes.
1162: */
1163: if (iagp->nfreeinos == cpu_to_le32(INOSPEREXT - 1)) {
1164: if ((int) le32_to_cpu(iagp->inofreefwd) >= 0)
1165: ciagp->inofreeback = iagp->inofreeback;
1166:
1167: if ((int) le32_to_cpu(iagp->inofreeback) >= 0)
1168: diagp->inofreefwd = iagp->inofreefwd;
1169: else
1170: imap->im_agctl[agno].inofree =
1171: le32_to_cpu(iagp->inofreefwd);
1172:
1173: iagp->inofreefwd = iagp->inofreeback = cpu_to_le32(-1);
1174: }
1175:
1176: /* update the inode extent address and working map
1177: * to reflect the free extent.
1178: * the permanent map should have been updated already
1179: * for the inode being freed.
1180: */
1181: if (iagp->pmap[extno] != 0) {
1182: jfs_error(ip->i_sb, "the pmap does not show inode free\n");
1183: }
1184: iagp->wmap[extno] = 0;
1185: PXDlength(&iagp->inoext[extno], 0);
1186: PXDaddress(&iagp->inoext[extno], 0);
1187:
1188: /* update the free extent and free inode summary maps
1189: * to reflect the freed extent.
1190: * the inode summary map is marked to indicate no inodes
1191: * available for the freed extent.
1192: */
1193: sword = extno >> L2EXTSPERSUM;
1194: bitno = extno & (EXTSPERSUM - 1);
1195: mask = HIGHORDER >> bitno;
1196: iagp->inosmap[sword] |= cpu_to_le32(mask);
1197: iagp->extsmap[sword] &= cpu_to_le32(~mask);
1198:
1199: /* update the number of free inodes and number of free extents
1200: * for the iag.
1201: */
1202: le32_add_cpu(&iagp->nfreeinos, -(INOSPEREXT - 1));
1203: le32_add_cpu(&iagp->nfreeexts, 1);
1204:
1205: /* update the number of free inodes and backed inodes
1206: * at the ag and inode map level.
1207: */
1208: imap->im_agctl[agno].numfree -= (INOSPEREXT - 1);
1209: imap->im_agctl[agno].numinos -= INOSPEREXT;
1210: atomic_sub(INOSPEREXT - 1, &imap->im_numfree);
1211: atomic_sub(INOSPEREXT, &imap->im_numinos);
1212:
1213: if (amp)
1214: write_metapage(amp);
1215: if (bmp)
1216: write_metapage(bmp);
1217: if (cmp)
1218: write_metapage(cmp);
1219: if (dmp)
1220: write_metapage(dmp);
1221:
1222: /*
1223: * start transaction to update block allocation map
1224: * for the inode extent freed;
1225: *
1226: * N.B. AG_LOCK is released and iag will be released below, and
1227: * other thread may allocate inode from/reusing the ixad freed
1228: * BUT with new/different backing inode extent from the extent
1229: * to be freed by the transaction;
1230: */
1231: tid = txBegin(ipimap->i_sb, COMMIT_FORCE);
1232: mutex_lock(&JFS_IP(ipimap)->commit_mutex);
1233:
1234: /* acquire tlock of the iag page of the freed ixad
1235: * to force the page NOHOMEOK (even though no data is
1236: * logged from the iag page) until NOREDOPAGE|FREEXTENT log
1237: * for the free of the extent is committed;
1238: * write FREEXTENT|NOREDOPAGE log record
1239: * N.B. linelock is overlaid as freed extent descriptor;
1240: */
1241: tlck = txLock(tid, ipimap, mp, tlckINODE | tlckFREE);
1242: pxdlock = (struct pxd_lock *) & tlck->lock;
1243: pxdlock->flag = mlckFREEPXD;
1244: pxdlock->pxd = freepxd;
1245: pxdlock->index = 1;
1246:
1247: write_metapage(mp);
1248:
1249: iplist[0] = ipimap;
1250:
1251: /*
1252: * logredo needs the IAG number and IAG extent index in order
1253: * to ensure that the IMap is consistent. The least disruptive
1254: * way to pass these values through to the transaction manager
1255: * is in the iplist array.
1256: *
1257: * It's not pretty, but it works.
1258: */
1259: iplist[1] = (struct inode *) (size_t)iagno;
1260: iplist[2] = (struct inode *) (size_t)extno;
1261:
1262: rc = txCommit(tid, 1, &iplist[0], COMMIT_FORCE);
1263:
1264: txEnd(tid);
1265: mutex_unlock(&JFS_IP(ipimap)->commit_mutex);
1266:
1267: /* unlock the AG inode map information */
1268: AG_UNLOCK(imap, agno);
1269:
1270: return (0);
1271:
1272: error_out:
1273: IREAD_UNLOCK(ipimap);
1274:
1275: if (amp)
1276: release_metapage(amp);
1277: if (bmp)
1278: release_metapage(bmp);
1279: if (cmp)
1280: release_metapage(cmp);
1281: if (dmp)
1282: release_metapage(dmp);
1283:
1284: AG_UNLOCK(imap, agno);
1285:
1286: release_metapage(mp);
1287:
1288: return (rc);
1289: }
SourceFile:fs/jfs/jfs_imap.c] |
| 145/2 | 2026/03/16 16:02 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 149595 output: 45 thoughts: 288 Thoughts: **My JFS File System Data Block Disambiguation**

Okay, let's break down how JFS frees an inode and its associated data blocks. My understanding so far is that `diFree` frees the inode itself from the inode map, but not the file's actual data blocks; those were supposed to be freed when the file was truncated, which makes sense. But then I saw that `jfs_evict_inode` calls `truncate_inode_pages_final(&inode->i_data)`. That looked promising, yet it *doesn't* actually call `jfs_truncate` directly. This had me confused: I thought for sure `jfs_evict_inode` was the crucial step I was missing. Perhaps it doesn't free the blocks at all?

Then I realized: `jfs_evict_inode` is triggered when `i_nlink` is zero. Fine. But the crucial part is what happens *before* `jfs_evict_inode` gets called. `vfs_unlink` or `vfs_rmdir` calls `jfs_unlink` or `jfs_rmdir`, respectively, but those don't truncate the file either; they just remove the directory entry. So, no truncation there.

Okay, let's back up. When *does* the file get truncated, then? I remember `jfs_setattr` calls `jfs_truncate` when the file size (`i_size`) is changed. However, when a file is simply deleted, the size isn't modified. And then I spotted something I'd missed before: when `i_nlink` is zero, `jfs_evict_inode` calls `jfs_free_zero_link(inode)` *if* `test_cflag(COMMIT_Freewmap, inode)` is true. That has me hopeful. Perhaps this is the key? I need to check `jfs_free_zero_link` now. |
| 146/2 | 2026/03/16 16:03 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/inode.c IncludeLines:true Name:jfs_free_zero_link] Results: map[SourceCode: 680: /*
681: * NAME: jfs_free_zero_link()
682: *
683: * FUNCTION: for non-directory, called by iClose(),
684: * free resources of a file from cache and WORKING map
685: * for a file previously committed with zero link count
686: * while associated with a pager object,
687: *
688: * PARAMETER: ip - pointer to inode of file.
689: */
690: void jfs_free_zero_link(struct inode *ip)
691: {
692: int type;
693:
694: jfs_info("jfs_free_zero_link: ip = 0x%p", ip);
695:
696: /* return if not reg or symbolic link or if size is
697: * already ok.
698: */
699: type = ip->i_mode & S_IFMT;
700:
701: switch (type) {
702: case S_IFREG:
703: break;
704: case S_IFLNK:
705: /* if its contained in inode nothing to do */
706: if (ip->i_size < IDATASIZE)
707: return;
708: break;
709: default:
710: return;
711: }
712:
713: /*
714: * free EA
715: */
716: if (JFS_IP(ip)->ea.flag & DXD_EXTENT) {
717: s64 xaddr = addressDXD(&JFS_IP(ip)->ea);
718: int xlen = lengthDXD(&JFS_IP(ip)->ea);
719: struct maplock maplock; /* maplock for COMMIT_WMAP */
720: struct pxd_lock *pxdlock; /* maplock for COMMIT_WMAP */
721:
722: /* free EA pages from cache */
723: invalidate_dxd_metapages(ip, JFS_IP(ip)->ea);
724:
725: /* free EA extent from working block map */
726: maplock.index = 1;
727: pxdlock = (struct pxd_lock *) & maplock;
728: pxdlock->flag = mlckFREEPXD;
729: PXDaddress(&pxdlock->pxd, xaddr);
730: PXDlength(&pxdlock->pxd, xlen);
731: txFreeMap(ip, pxdlock, NULL, COMMIT_WMAP);
732: }
733:
734: /*
735: * free ACL
736: */
737: if (JFS_IP(ip)->acl.flag & DXD_EXTENT) {
738: s64 xaddr = addressDXD(&JFS_IP(ip)->acl);
739: int xlen = lengthDXD(&JFS_IP(ip)->acl);
740: struct maplock maplock; /* maplock for COMMIT_WMAP */
741: struct pxd_lock *pxdlock; /* maplock for COMMIT_WMAP */
742:
743: invalidate_dxd_metapages(ip, JFS_IP(ip)->acl);
744:
745: /* free ACL extent from working block map */
746: maplock.index = 1;
747: pxdlock = (struct pxd_lock *) & maplock;
748: pxdlock->flag = mlckFREEPXD;
749: PXDaddress(&pxdlock->pxd, xaddr);
750: PXDlength(&pxdlock->pxd, xlen);
751: txFreeMap(ip, pxdlock, NULL, COMMIT_WMAP);
752: }
753:
754: /*
755: * free xtree/data (truncate to zero length):
756: * free xtree/data pages from cache, and
757: * free xtree/data blocks from working block map;
758: */
759: if (ip->i_size)
760: xtTruncate(0, ip, 0, COMMIT_WMAP);
761: }
SourceFile:fs/jfs/namei.c] |
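Putting the pieces together so far, the deletion path for a regular file looks roughly like this (a reconstruction from the reasoning and sources above; simplified pseudocode, not verbatim kernel control flow):

```
vfs_unlink() / vfs_rmdir()
  -> jfs_unlink() / jfs_rmdir()   removes the directory entry and
                                  drops i_nlink; no truncation here
iput()                            last reference to the inode dropped
  -> jfs_evict_inode()            runs with i_nlink == 0
     -> truncate_inode_pages_final(&inode->i_data)
                                  page cache only; no disk blocks freed
     -> jfs_free_zero_link()      only if COMMIT_Freewmap is set:
                                  frees EA/ACL extents, then
                                  xtTruncate(0, ip, 0, COMMIT_WMAP)
                                  frees the data blocks
     -> diFree()                  frees the inode in the inode map
     -> dquot_free_inode()        quota bookkeeping
```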
| 147/2 | 2026/03/16 16:03 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 151682 output: 43 thoughts: 112 Thoughts: **Delving into JFS Directory Removal** Okay, so `jfs_free_zero_link` is specifically for regular files and symbolic links, got it. My focus shifts: what about directories (`S_IFDIR`)? Where does directory cleanup fit into this picture? When a directory gets removed, I initially thought `jfs_rmdir` might call `dtInitRoot` with the tid, the inode pointer, and the parent's inode pointer, but on reflection that doesn't sound right. No, the removal from the parent directory is handled within `jfs_rmdir`, probably through `dtDelete`. But that still leaves the directory itself! I need to trace the code to be sure, so I'll check the `jfs_rmdir` function to see exactly how directories are freed. |
| 148/2 | 2026/03/16 16:03 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/namei.c IncludeLines:true Name:jfs_rmdir] Results: map[SourceCode: 315: /*
316: * NAME: jfs_rmdir(dip, dentry)
317: *
318: * FUNCTION: remove a link to child directory
319: *
320: * PARAMETER: dip - parent inode
321: * dentry - child directory dentry
322: *
323: * RETURN: -EINVAL - if name is . or ..
324: * -EINVAL - if . or .. exist but are invalid.
325: * errors from subroutines
326: *
327: * note:
328: * if other threads have the directory open when the last link
329: * is removed, the "." and ".." entries, if present, are removed before
330: * rmdir() returns and no new entries may be created in the directory,
331: * but the directory is not removed until the last reference to
332: * the directory is released (cf.unlink() of regular file).
333: */
334: static int jfs_rmdir(struct inode *dip, struct dentry *dentry)
335: {
336: int rc;
337: tid_t tid; /* transaction id */
338: struct inode *ip = d_inode(dentry);
339: ino_t ino;
340: struct component_name dname;
341: struct inode *iplist[2];
342: struct tblock *tblk;
343:
344: jfs_info("jfs_rmdir: dip:0x%p name:%pd", dip, dentry);
345:
346: /* Init inode for quota operations. */
347: rc = dquot_initialize(dip);
348: if (rc)
349: goto out;
350: rc = dquot_initialize(ip);
351: if (rc)
352: goto out;
353:
354: /* directory must be empty to be removed */
355: if (!dtEmpty(ip)) {
356: rc = -ENOTEMPTY;
357: goto out;
358: }
359:
360: if ((rc = get_UCSname(&dname, dentry))) {
361: goto out;
362: }
363:
364: tid = txBegin(dip->i_sb, 0);
365:
366: mutex_lock_nested(&JFS_IP(dip)->commit_mutex, COMMIT_MUTEX_PARENT);
367: mutex_lock_nested(&JFS_IP(ip)->commit_mutex, COMMIT_MUTEX_CHILD);
368:
369: iplist[0] = dip;
370: iplist[1] = ip;
371:
372: tblk = tid_to_tblock(tid);
373: tblk->xflag |= COMMIT_DELETE;
374: tblk->u.ip = ip;
375:
376: /*
377: * delete the entry of target directory from parent directory
378: */
379: ino = ip->i_ino;
380: if ((rc = dtDelete(tid, dip, &dname, &ino, JFS_REMOVE))) {
381: jfs_err("jfs_rmdir: dtDelete returned %d", rc);
382: if (rc == -EIO)
383: txAbort(tid, 1);
384: txEnd(tid);
385: mutex_unlock(&JFS_IP(ip)->commit_mutex);
386: mutex_unlock(&JFS_IP(dip)->commit_mutex);
387:
388: goto out2;
389: }
390:
391: /* update parent directory's link count corresponding
392: * to ".." entry of the target directory deleted
393: */
394: inode_set_mtime_to_ts(dip, inode_set_ctime_current(dip));
395: inode_dec_link_count(dip);
396:
397: /*
398: * OS/2 could have created EA and/or ACL
399: */
400: /* free EA from both persistent and working map */
401: if (JFS_IP(ip)->ea.flag & DXD_EXTENT) {
402: /* free EA pages */
403: txEA(tid, ip, &JFS_IP(ip)->ea, NULL);
404: }
405: JFS_IP(ip)->ea.flag = 0;
406:
407: /* free ACL from both persistent and working map */
408: if (JFS_IP(ip)->acl.flag & DXD_EXTENT) {
409: /* free ACL pages */
410: txEA(tid, ip, &JFS_IP(ip)->acl, NULL);
411: }
412: JFS_IP(ip)->acl.flag = 0;
413:
414: /* mark the target directory as deleted */
415: clear_nlink(ip);
416: mark_inode_dirty(ip);
417:
418: rc = txCommit(tid, 2, &iplist[0], 0);
419:
420: txEnd(tid);
421:
422: mutex_unlock(&JFS_IP(ip)->commit_mutex);
423: mutex_unlock(&JFS_IP(dip)->commit_mutex);
424:
425: /*
426: * Truncating the directory index table is not guaranteed. It
427: * may need to be done iteratively
428: */
429: if (test_cflag(COMMIT_Stale, dip)) {
430: if (dip->i_size > 1)
431: jfs_truncate_nolock(dip, 0);
432:
433: clear_cflag(COMMIT_Stale, dip);
434: }
435:
436: out2:
437: free_UCSname(&dname);
438:
439: out:
440: jfs_info("jfs_rmdir: rc:%d", rc);
441: return rc;
442: }
SourceFile:fs/jfs/namei.c] |
| 149/2 | 2026/03/16 16:03 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 153914 output: 44 thoughts: 472 Thoughts: **Analyzing the JFS Directory Removal Process**
Okay, let me think this through. `jfs_rmdir` calls `dtDelete` to remove the child directory's entry from the parent. That much is straightforward. But it *doesn't* free the actual directory pages of the child directory itself. Hold on a second... the child directory is *empty*, right? So, `dtEmpty(ip)` is true. That means, at most, it should only have the root page, which is inline within the inode, and potentially an index table. If it's empty... it has no external pages, right? Yes, if it's empty, no external pages exist.
Ah, but what if it *wasn't* empty? What if it *had* many files and those files got deleted first? When files are deleted, `dtDelete` gets called, and then `dtDeleteUp` which *does* free the empty pages. So, if the directory is empty now, all its external pages should *already* be freed.
Okay, let's circle back to the index table... Where is that stored? Right, it's stored in `ip->i_mapping`. Now, in the `jfs_rmdir` code, I see this:
```c
/*
* Truncating the directory index table is not guaranteed. It
* may need to be done iteratively
*/
if (test_cflag(COMMIT_Stale, dip)) {
if (dip->i_size > 1)
jfs_truncate_nolock(dip, 0);
clear_cflag(COMMIT_Stale, dip);
}
```
Wait, this code is acting on `dip`, the *parent* directory, not `ip`, the child directory. Okay, so what about the index table of the child directory? If the child is empty, does it *still* have an index table? Yes, it seems the index table is never shrunk.
So, `jfs_rmdir` *doesn't* free the child directory's index table. Where does that get handled? Does it happen when the inode is evicted? No, I need to check. `jfs_evict_inode`... it does *not* call `jfs_truncate`!
Wait a second... If `jfs_evict_inode` doesn't call `jfs_truncate`, then the index table blocks are potentially *leaked*? That's a serious problem. I need to re-examine `jfs_evict_inode` again.
|
| 150/2 | 2026/03/16 16:05 | tool | codesearch-definition-source |
0m Args:map[ContextFile:fs/jfs/inode.c IncludeLines:true Name:jfs_evict_inode] Results: map[SourceCode: 145: void jfs_evict_inode(struct inode *inode)
146: {
147: struct jfs_inode_info *ji = JFS_IP(inode);
148:
149: jfs_info("In jfs_evict_inode, inode = 0x%p", inode);
150:
151: if (!inode->i_nlink && !is_bad_inode(inode)) {
152: dquot_initialize(inode);
153:
154: truncate_inode_pages_final(&inode->i_data);
155: if (JFS_IP(inode)->fileset == FILESYSTEM_I) {
156: struct inode *ipimap = JFS_SBI(inode->i_sb)->ipimap;
157:
158: if (test_cflag(COMMIT_Freewmap, inode))
159: jfs_free_zero_link(inode);
160:
161: if (ipimap && JFS_IP(ipimap)->i_imap)
162: diFree(inode);
163:
164: /*
165: * Free the inode from the quota allocation.
166: */
167: dquot_free_inode(inode);
168: }
169: } else {
170: truncate_inode_pages_final(&inode->i_data);
171: }
172: clear_inode(inode);
173: dquot_drop(inode);
174:
175: BUG_ON(!list_empty(&ji->anon_inode_list));
176:
177: spin_lock_irq(&ji->ag_lock);
178: if (ji->active_ag != -1) {
179: struct bmap *bmap = JFS_SBI(inode->i_sb)->bmap;
180: atomic_dec(&bmap->db_active[ji->active_ag]);
181: ji->active_ag = -1;
182: }
183: spin_unlock_irq(&ji->ag_lock);
184: }
SourceFile:fs/jfs/inode.c] |
| 151/2 | 2026/03/16 16:05 | llm | debugger |
0m Model:gemini-3.1-pro-preview Tokens: input: 155644 output: 19 thoughts: 57 Thoughts: **Investigating the JFS Freewmap Logic** Okay, so `jfs_free_zero_link` gets called when `test_cflag(COMMIT_Freewmap, inode)` evaluates to true. My immediate question is: *when* is this `COMMIT_Freewmap` flag actually set? I need to understand the conditions that trigger this particular code path. I'll need to trace the setting of this flag to determine the sequence of events leading up to the `jfs_free_zero_link` call. Time to search the codebase for every instance of `COMMIT_Freewmap`. That should give me the answer. |
| 152/2 | 2026/03/16 16:05 | tool | grepper |
0m Args:map[Expression:COMMIT_Freewmap] Results: map[Output:fs/jfs/inode.c=145=void jfs_evict_inode(struct inode *inode)
--
fs/jfs/inode.c-157-
fs/jfs/inode.c:158: if (test_cflag(COMMIT_Freewmap, inode))
fs/jfs/inode.c-159- jfs_free_zero_link(inode);
--
fs/jfs/jfs_incore.h=118=enum cflags {
--
fs/jfs/jfs_incore.h-120- COMMIT_Inlineea, /* commit inode inline EA */
fs/jfs/jfs_incore.h:121: COMMIT_Freewmap, /* free WMAP at iClose() */
fs/jfs/jfs_incore.h-122- COMMIT_Dirty, /* Inode is really dirty */
--
fs/jfs/namei.c=624=static s64 commitZeroLink(tid_t tid, struct inode *ip)
--
fs/jfs/namei.c-646-
fs/jfs/namei.c:647: set_cflag(COMMIT_Freewmap, ip);
fs/jfs/namei.c-648-
] |
| 153/2 | 2026/03/16 16:05 | llm | debugger |
1m Model:gemini-3.1-pro-preview Tokens: input: 156188 output: 41 thoughts: 9 Thoughts: **Evaluating `commitZeroLink`** Alright, let's take a look at `commitZeroLink`. Before I can tell whether it (or anything on its path) sets `COMMIT_Freewmap` for directories, I need its actual definition and its callers.
Based on the name, I'd anticipate it handles the commit of an inode whose link count has dropped to zero, but I need to see *what* it does before I can evaluate it in any meaningful way. Time to pull up the source. |
| 154/2 | 2026/03/16 16:06 | tool | codesearch-definition-source |
0m Args:map[ContextFile:fs/jfs/namei.c IncludeLines:true Name:commitZeroLink] Results: map[SourceCode: 600: /*
601: * NAME: commitZeroLink()
602: *
603: * FUNCTION: for non-directory, called by jfs_remove(),
604: * truncate a regular file, directory or symbolic
605: * link to zero length. return 0 if type is not
606: * one of these.
607: *
608: * if the file is currently associated with a VM segment
609: * only permanent disk and inode map resources are freed,
610: * and neither the inode nor indirect blocks are modified
611: * so that the resources can be later freed in the work
612: * map by ctrunc1.
613: * if there is no VM segment on entry, the resources are
614: * freed in both work and permanent map.
615: * (? for temporary file - memory object is cached even
616: * after no reference:
617: * reference count > 0 - )
618: *
619: * PARAMETERS: cd - pointer to commit data structure.
620: * current inode is the one to truncate.
621: *
622: * RETURN: Errors from subroutines
623: */
624: static s64 commitZeroLink(tid_t tid, struct inode *ip)
625: {
626: int filetype;
627: struct tblock *tblk;
628:
629: jfs_info("commitZeroLink: tid = %d, ip = 0x%p", tid, ip);
630:
631: filetype = ip->i_mode & S_IFMT;
632: switch (filetype) {
633: case S_IFREG:
634: break;
635: case S_IFLNK:
636: /* fast symbolic link */
637: if (ip->i_size < IDATASIZE) {
638: ip->i_size = 0;
639: return 0;
640: }
641: break;
642: default:
643: assert(filetype != S_IFDIR);
644: return 0;
645: }
646:
647: set_cflag(COMMIT_Freewmap, ip);
648:
649: /* mark transaction of block map update type */
650: tblk = tid_to_tblock(tid);
651: tblk->xflag |= COMMIT_PMAP;
652:
653: /*
654: * free EA
655: */
656: if (JFS_IP(ip)->ea.flag & DXD_EXTENT)
657: /* acquire maplock on EA to be freed from block map */
658: txEA(tid, ip, &JFS_IP(ip)->ea, NULL);
659:
660: /*
661: * free ACL
662: */
663: if (JFS_IP(ip)->acl.flag & DXD_EXTENT)
664: /* acquire maplock on EA to be freed from block map */
665: txEA(tid, ip, &JFS_IP(ip)->acl, NULL);
666:
667: /*
668: * free xtree/data (truncate to zero length):
669: * free xtree/data pages from cache if COMMIT_PWMAP,
670: * free xtree/data blocks from persistent block map, and
671: * free xtree/data blocks from working block map if COMMIT_PWMAP;
672: */
673: if (ip->i_size)
674: return xtTruncate_pmap(tid, ip, 0);
675:
676: return 0;
677: }
SourceFile:fs/jfs/namei.c] |
| 155/2 | 2026/03/16 16:06 | llm | debugger |
1m Model:gemini-3.1-pro-preview Tokens: input: 157831 output: 43 thoughts: 422 Thoughts: **Investigating Directory Link Count and Page Release** Okay, so `commitZeroLink` is triggered by `jfs_unlink` for regular files and symlinks, got it. But for directories, `jfs_rmdir` handles things differently. It calls `dtDelete` to remove the entry, and then reduces the link count (`ip->i_nlink = 0`). Hang on... `jfs_rmdir` doesn't call `commitZeroLink`! That means `COMMIT_Freewmap` isn't being set for directories, so `jfs_evict_inode` isn't going to call `jfs_free_zero_link` for directories either! But how are the pages of an empty directory released then? Let me re-examine `jfs_rmdir`. Ah, here it is: if `COMMIT_Stale` is set, it calls `jfs_truncate_nolock(dip, 0)` on the *parent* directory. That handles the stale data in the parent, but I'm concerned about the *child* directory that is being removed. If the child directory is empty, which is implied by `dtEmpty(ip)`, it should have no external pages, correct? Wait... Does it have an index table? If `DO_INDEX(ip)` is true, it might! And that index table is stored within `ip->i_mapping`. So, when the directory is removed, is that index table freed? I need to check `dtEmpty` more closely. |
| 156/2 | 2026/03/16 16:08 | tool | codesearch-definition-source |
0m Error: requested entity does not exist Args: map[ContextFile:fs/jfs/jfs_dtree.c IncludeLines:true Name:dtEmpty] Results: map[SourceCode: SourceFile:] |
| 157/2 | 2026/03/16 16:08 | llm | debugger |
1m Model:gemini-3.1-pro-preview Tokens: input: 158644 output: 16 thoughts: 17 Thoughts: **Initial Investigation into `dtEmpty` Within the `fs/jfs/` Directory** The definition lookup for `dtEmpty` came back empty, so it's probably a macro rather than a function. My immediate goal is to locate its definition and every use so I know what "empty" actually means here. Time to grep the tree for `dtEmpty` and see what context turns up. |
| 158/2 | 2026/03/16 16:10 | tool | grepper |
0m Args:map[Expression:dtEmpty] Results: map[Output:fs/jfs/jfs_dtree.c=2714=int jfs_readdir(struct file *file, struct dir_context *ctx)
--
fs/jfs/jfs_dtree.c-2769-
fs/jfs/jfs_dtree.c:2770: if (dtEmpty(ip) ||
fs/jfs/jfs_dtree.c-2771- (dir_index >= JFS_IP(ip)->next_index)) {
--
fs/jfs/jfs_dtree.c-2826- */
fs/jfs/jfs_dtree.c:2827: if (dtEmpty(ip)) {
fs/jfs/jfs_dtree.c-2828- ctx->pos = DIREND;
--
fs/jfs/jfs_dtree.c-2868-
fs/jfs/jfs_dtree.c:2869: if (dtEmpty(ip)) {
fs/jfs/jfs_dtree.c-2870- ctx->pos = DIREND;
--
fs/jfs/jfs_dtree.h=132=typedef union {
--
fs/jfs/jfs_dtree.h-153-
fs/jfs/jfs_dtree.h:154:#define dtEmpty(IP) (JFS_IP(IP)->i_dtroot.header.nextindex == 0)
fs/jfs/jfs_dtree.h-155-
--
fs/jfs/namei.c=334=static int jfs_rmdir(struct inode *dip, struct dentry *dentry)
--
fs/jfs/namei.c-354- /* directory must be empty to be removed */
fs/jfs/namei.c:355: if (!dtEmpty(ip)) {
fs/jfs/namei.c-356- rc = -ENOTEMPTY;
--
fs/jfs/namei.c=1069=static int jfs_rename(struct mnt_idmap *idmap, struct inode *old_dir,
--
fs/jfs/namei.c-1138- if (new_ip) {
fs/jfs/namei.c:1139: if (!dtEmpty(new_ip)) {
fs/jfs/namei.c-1140- rc = -ENOTEMPTY;
] |
| 159/2 | 2026/03/16 16:10 | llm | debugger |
1m Model:gemini-3.1-pro-preview Tokens: input: 159497 output: 44 thoughts: 863 Thoughts: **The Directory Index Table Leak** Okay, so I'm digging into this `dtEmpty` check, and it's pretty straightforward: if `nextindex` is zero, the directory is empty, except for those special entries `.` and `..`. No entries means no external pages, right? That's what I initially thought, but then I remembered the index table. `dtDeleteUp` should be responsible for cleaning up external pages when the directory is empty, but does it take care of the index table? No, it only frees B-tree pages. The index table remains untouched. So if a directory has a huge number of files, its index table can grow, and those pages are stored externally, got it. Now, when `jfs_rmdir` runs to delete the directory, it *doesn't* free the index table. That's a huge oversight. Okay, how is the index table eventually freed? I know `jfs_evict_inode` is called, and that calls `truncate_inode_pages_final` on the inode's data. But does `jfs_evict_inode` call `jfs_truncate` to free the index table's blocks? No, it does not. Let me double-check `jfs_evict_inode`... aha! If the `COMMIT_Freewmap` flag is *not* set, then `jfs_free_zero_link` is *not* called. And if that isn't called, then the disk blocks of the index table *leak*. They're not freed! And the metapages for the index table aren't discarded! Now, the killer: the metapages that remain might still be dirty. If they are dirty, `metapage_release_folio` will return false. But then, `truncate_inode_pages_final` gets called and removes those folios from the page cache, setting `folio->mapping` to NULL! Later `txUnlock` calls `put_metapage` and then `release_metapage`, which sees that the metapage is dirty, and calls `metapage_write_one`. This, in turn, calls `metapage_write_folio`, which dereferences `folio->mapping->host`...and `folio->mapping` is NULL. Boom! That perfectly explains the crash. A classic NULL pointer dereference.
Let's verify that `COMMIT_Freewmap` is actually *not* set for directories. Looking at `jfs_rmdir`, yep, it's not set. Hold on...`jfs_rmdir` *does* set `tblk->xflag |= COMMIT_DELETE`. Does `txCommit` set `COMMIT_Freewmap` for `COMMIT_DELETE`? Let me go check `txCommit`. I have a feeling we've found the root cause here. This is a nasty leak! |
| 160/2 | 2026/03/16 16:11 | tool | codesearch-definition-source |
0m Args:map[ContextFile:fs/jfs/jfs_txnmgr.c IncludeLines:true Name:txCommit] Results: map[SourceCode:1096: /*
1097: * NAME: txCommit()
1098: *
1099: * FUNCTION: commit the changes to the objects specified in
1100: * clist. For journalled segments only the
1101: * changes of the caller are committed, ie by tid.
1102: * for non-journalled segments the data are flushed to
1103: * disk and then the change to the disk inode and indirect
1104: * blocks committed (so blocks newly allocated to the
1105: * segment will be made a part of the segment atomically).
1106: *
1107: * all of the segments specified in clist must be in
1108: * one file system. no more than 6 segments are needed
1109: * to handle all unix svcs.
1110: *
1111: * if the i_nlink field (i.e. disk inode link count)
1112: * is zero, and the type of inode is a regular file or
1113: * directory, or symbolic link , the inode is truncated
1114: * to zero length. the truncation is committed but the
1115: * VM resources are unaffected until it is closed (see
1116: * iput and iclose).
1117: *
1118: * PARAMETER:
1119: *
1120: * RETURN:
1121: *
1122: * serialization:
1123: * on entry the inode lock on each segment is assumed
1124: * to be held.
1125: *
1126: * i/o error:
1127: */
1128: int txCommit(tid_t tid, /* transaction identifier */
1129: int nip, /* number of inodes to commit */
1130: struct inode **iplist, /* list of inode to commit */
1131: int flag)
1132: {
1133: int rc = 0;
1134: struct commit cd;
1135: struct jfs_log *log;
1136: struct tblock *tblk;
1137: struct lrd *lrd;
1138: struct inode *ip;
1139: struct jfs_inode_info *jfs_ip;
1140: int k, n;
1141: ino_t top;
1142: struct super_block *sb;
1143:
1144: jfs_info("txCommit, tid = %d, flag = %d", tid, flag);
1145: /* is read-only file system ? */
1146: if (isReadOnly(iplist[0])) {
1147: rc = -EROFS;
1148: goto TheEnd;
1149: }
1150:
1151: sb = cd.sb = iplist[0]->i_sb;
1152: cd.tid = tid;
1153:
1154: if (tid == 0)
1155: tid = txBegin(sb, 0);
1156: tblk = tid_to_tblock(tid);
1157:
1158: /*
1159: * initialize commit structure
1160: */
1161: log = JFS_SBI(sb)->log;
1162: cd.log = log;
1163:
1164: /* initialize log record descriptor in commit */
1165: lrd = &cd.lrd;
1166: lrd->logtid = cpu_to_le32(tblk->logtid);
1167: lrd->backchain = 0;
1168:
1169: tblk->xflag |= flag;
1170:
1171: if ((flag & (COMMIT_FORCE | COMMIT_SYNC)) == 0)
1172: tblk->xflag |= COMMIT_LAZY;
1173: /*
1174: * prepare non-journaled objects for commit
1175: *
1176: * flush data pages of non-journaled file
1177: * to prevent the file getting non-initialized disk blocks
1178: * in case of crash.
1179: * (new blocks - )
1180: */
1181: cd.iplist = iplist;
1182: cd.nip = nip;
1183:
1184: /*
1185: * acquire transaction lock on (on-disk) inodes
1186: *
1187: * update on-disk inode from in-memory inode
1188: * acquiring transaction locks for AFTER records
1189: * on the on-disk inode of file object
1190: *
1191: * sort the inodes array by inode number in descending order
1192: * to prevent deadlock when acquiring transaction lock
1193: * of on-disk inodes on multiple on-disk inode pages by
1194: * multiple concurrent transactions
1195: */
1196: for (k = 0; k < cd.nip; k++) {
1197: top = (cd.iplist[k])->i_ino;
1198: for (n = k + 1; n < cd.nip; n++) {
1199: ip = cd.iplist[n];
1200: if (ip->i_ino > top) {
1201: top = ip->i_ino;
1202: cd.iplist[n] = cd.iplist[k];
1203: cd.iplist[k] = ip;
1204: }
1205: }
1206:
1207: ip = cd.iplist[k];
1208: jfs_ip = JFS_IP(ip);
1209:
1210: /*
1211: * BUGBUG - This code has temporarily been removed. The
1212: * intent is to ensure that any file data is written before
1213: * the metadata is committed to the journal. This prevents
1214: * uninitialized data from appearing in a file after the
1215: * journal has been replayed. (The uninitialized data
1216: * could be sensitive data removed by another user.)
1217: *
1218: * The problem now is that we are holding the IWRITELOCK
1219: * on the inode, and calling filemap_fdatawrite on an
1220: * unmapped page will cause a deadlock in jfs_get_block.
1221: *
1222: * The long term solution is to pare down the use of
1223: * IWRITELOCK. We are currently holding it too long.
1224: * We could also be smarter about which data pages need
1225: * to be written before the transaction is committed and
1226: * when we don't need to worry about it at all.
1227: *
1228: * if ((!S_ISDIR(ip->i_mode))
1229: * && (tblk->flag & COMMIT_DELETE) == 0)
1230: * filemap_write_and_wait(ip->i_mapping);
1231: */
1232:
1233: /*
1234: * Mark inode as not dirty. It will still be on the dirty
1235: * inode list, but we'll know not to commit it again unless
1236: * it gets marked dirty again
1237: */
1238: clear_cflag(COMMIT_Dirty, ip);
1239:
1240: /* inherit anonymous tlock(s) of inode */
1241: if (jfs_ip->atlhead) {
1242: lid_to_tlock(jfs_ip->atltail)->next = tblk->next;
1243: tblk->next = jfs_ip->atlhead;
1244: if (!tblk->last)
1245: tblk->last = jfs_ip->atltail;
1246: jfs_ip->atlhead = jfs_ip->atltail = 0;
1247: TXN_LOCK();
1248: list_del_init(&jfs_ip->anon_inode_list);
1249: TXN_UNLOCK();
1250: }
1251:
1252: /*
1253: * acquire transaction lock on on-disk inode page
1254: * (become first tlock of the tblk's tlock list)
1255: */
1256: if (((rc = diWrite(tid, ip))))
1257: goto out;
1258: }
1259:
1260: /*
1261: * write log records from transaction locks
1262: *
1263: * txUpdateMap() resets XAD_NEW in XAD.
1264: */
1265: txLog(log, tblk, &cd);
1266:
1267: /*
1268: * Ensure that inode isn't reused before
1269: * lazy commit thread finishes processing
1270: */
1271: if (tblk->xflag & COMMIT_DELETE) {
1272: ihold(tblk->u.ip);
1273: /*
1274: * Avoid a rare deadlock
1275: *
1276: * If the inode is locked, we may be blocked in
1277: * jfs_commit_inode. If so, we don't want the
1278: * lazy_commit thread doing the last iput() on the inode
1279: * since that may block on the locked inode. Instead,
1280: * commit the transaction synchronously, so the last iput
1281: * will be done by the calling thread (or later)
1282: */
1283: /*
1284: * I believe this code is no longer needed. Splitting I_LOCK
1285: * into two bits, I_NEW and I_SYNC should prevent this
1286: * deadlock as well. But since I don't have a JFS testload
1287: * to verify this, only a trivial s/I_LOCK/I_SYNC/ was done.
1288: * Joern
1289: */
1290: if (inode_state_read_once(tblk->u.ip) & I_SYNC)
1291: tblk->xflag &= ~COMMIT_LAZY;
1292: }
1293:
1294: ASSERT((!(tblk->xflag & COMMIT_DELETE)) ||
1295: ((tblk->u.ip->i_nlink == 0) &&
1296: !test_cflag(COMMIT_Nolink, tblk->u.ip)));
1297:
1298: /*
1299: * write COMMIT log record
1300: */
1301: lrd->type = cpu_to_le16(LOG_COMMIT);
1302: lrd->length = 0;
1303: lmLog(log, tblk, lrd, NULL);
1304:
1305: lmGroupCommit(log, tblk);
1306:
1307: /*
1308: * - transaction is now committed -
1309: */
1310:
1311: /*
1312: * force pages in careful update
1313: * (imap addressing structure update)
1314: */
1315: if (flag & COMMIT_FORCE)
1316: txForce(tblk);
1317:
1318: /*
1319: * update allocation map.
1320: *
1321: * update inode allocation map and inode:
1322: * free pager lock on memory object of inode if any.
1323: * update block allocation map.
1324: *
1325: * txUpdateMap() resets XAD_NEW in XAD.
1326: */
1327: if (tblk->xflag & COMMIT_FORCE)
1328: txUpdateMap(tblk);
1329:
1330: /*
1331: * free transaction locks and pageout/free pages
1332: */
1333: txRelease(tblk);
1334:
1335: if ((tblk->flag & tblkGC_LAZY) == 0)
1336: txUnlock(tblk);
1337:
1338:
1339: /*
1340: * reset in-memory object state
1341: */
1342: for (k = 0; k < cd.nip; k++) {
1343: ip = cd.iplist[k];
1344: jfs_ip = JFS_IP(ip);
1345:
1346: /*
1347: * reset in-memory inode state
1348: */
1349: jfs_ip->bxflag = 0;
1350: jfs_ip->blid = 0;
1351: }
1352:
1353: out:
1354: if (rc != 0)
1355: txAbort(tid, 1);
1356:
1357: TheEnd:
1358: jfs_info("txCommit: tid = %d, returning %d", tid, rc);
1359: return rc;
1360: }
SourceFile:fs/jfs/jfs_txnmgr.c] |
| 161/2 | 2026/03/16 16:11 | llm | debugger |
0m Model:gemini-3.1-pro-preview Tokens: input: 164865 output: 18 thoughts: 1421 Thoughts: **Root Cause Analysis of JFS Directory Deletion Crash**
Okay, let's break this down. My initial thought was that `txCommit` should be responsible for setting `COMMIT_Freewmap`, but it seems like it's not happening for directories. This immediately suggests a leak; index table blocks are likely being lost when directories are deleted. But, the more immediate issue is the crash. It seems when a directory is deleted, the index table's metapages aren't being discarded correctly. They remain attached, dirty, and in the transaction's lock list.
Here’s the sequence. The index table blocks, which reside in `ip->i_mapping`, are associated with folios. The critical issue arises when `jfs_evict_inode` is called. It correctly calls `truncate_inode_pages_final` on the directory inode (`inode->i_data`). This action removes the folios of the index table from the page cache. Importantly, this sets `folio->mapping` to NULL! However, the metapages remain dirty and associated with the folios!
When the transaction commits, either synchronously or lazily, `txUnlock` is called. `txUnlock` eventually calls `put_metapage(mp)`, which in turn calls `release_metapage(mp)`. Now, `release_metapage` sees the `META_dirty` flag set on the metapage. It attempts to write the page back via `metapage_write_one(folio)`. But! `metapage_write_folio` is called, and *this* is the crux of the problem. It dereferences `folio->mapping->host`, but `folio->mapping` is NULL because the folios were truncated and removed from the page cache during `jfs_evict_inode`. This causes the general protection fault – the crash!
So, the root cause is the attempt to write a folio that has been truncated, and whose `folio->mapping` is NULL! This shouldn't be happening. We're trying to write back a page that doesn’t exist in the page cache anymore.
The question then becomes: why does `release_metapage` even *try* to write a dirty metapage that's no longer in the page cache? The code flow in `release_metapage` doesn't account for `folio->mapping` being NULL.
```c
if (test_bit(META_dirty, &mp->flag)) {
folio_mark_dirty(folio);
if (test_bit(META_sync, &mp->flag)) {
clear_bit(META_sync, &mp->flag);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
folio_lock(folio);
}
} else if (mp->lsn) /* discard_metapage doesn't remove it */
remove_from_logsync(mp);
```
The fix, in my mind, is straightforward. Before attempting to write the folio back in `release_metapage`, we need to check if `folio->mapping` is NULL. If it is, we *discard* the metapage by clearing `META_dirty` and possibly `META_sync` and move on. The modifications made to the metapage are no longer relevant since the file (directory) has been deleted (or truncated, in the case of a directory's index table).
```c
if (test_bit(META_dirty, &mp->flag)) {
if (!folio->mapping) {
clear_bit(META_dirty, &mp->flag);
/* maybe also clear META_sync? */
clear_bit(META_sync, &mp->flag);
} else {
folio_mark_dirty(folio);
if (test_bit(META_sync, &mp->flag)) {
clear_bit(META_sync, &mp->flag);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
folio_lock(folio);
}
}
}
```
Now, the question is, is this check safe? Yes, the `folio_lock(folio)` call at the beginning of `release_metapage` provides the necessary protection, so it's safe to check `folio->mapping` here.
The other thing to note is that `jfs_rmdir` contains the following:
```c
/*
 * Truncating the directory index table is not guaranteed. It
 * may need to be done iteratively
 */
if (test_cflag(COMMIT_Stale, dip)) {
	if (dip->i_size > 1)
		jfs_truncate_nolock(dip, 0);
	clear_cflag(COMMIT_Stale, dip);
}
```
|
| 162/2 | 2026/03/16 16:11 | tool | grepper |
0m Args:map[Expression:COMMIT_Stale] Results: map[Output:fs/jfs/jfs_dtree.c=2524=void dtInitRoot(tid_t tid, struct inode *ip, u32 idotdot)
--
fs/jfs/jfs_dtree.c-2554- * additional truncation is needed. The
fs/jfs/jfs_dtree.c:2555: * COMMIT_Stale flag tells caller that we
fs/jfs/jfs_dtree.c-2556- * initiated the truncation.
--
fs/jfs/jfs_dtree.c-2558- xtTruncate(tid, ip, 0, COMMIT_PWMAP);
fs/jfs/jfs_dtree.c:2559: set_cflag(COMMIT_Stale, ip);
fs/jfs/jfs_dtree.c-2560-
--
fs/jfs/jfs_incore.h=118=enum cflags {
--
fs/jfs/jfs_incore.h-123- COMMIT_Dirtable, /* commit changes to di_dirtable */
fs/jfs/jfs_incore.h:124: COMMIT_Stale, /* data extent is no longer valid */
fs/jfs/jfs_incore.h-125- COMMIT_Synclist, /* metadata pages on group commit synclist */
--
fs/jfs/namei.c=334=static int jfs_rmdir(struct inode *dip, struct dentry *dentry)
--
fs/jfs/namei.c-428- */
fs/jfs/namei.c:429: if (test_cflag(COMMIT_Stale, dip)) {
fs/jfs/namei.c-430- if (dip->i_size > 1)
--
fs/jfs/namei.c-432-
fs/jfs/namei.c:433: clear_cflag(COMMIT_Stale, dip);
fs/jfs/namei.c-434- }
--
fs/jfs/namei.c=464=static int jfs_unlink(struct inode *dip, struct dentry *dentry)
--
fs/jfs/namei.c-585- */
fs/jfs/namei.c:586: if (test_cflag(COMMIT_Stale, dip)) {
fs/jfs/namei.c-587- if (dip->i_size > 1)
--
fs/jfs/namei.c-589-
fs/jfs/namei.c:590: clear_cflag(COMMIT_Stale, dip);
fs/jfs/namei.c-591- }
--
fs/jfs/namei.c=1069=static int jfs_rename(struct mnt_idmap *idmap, struct inode *old_dir,
--
fs/jfs/namei.c-1333- */
fs/jfs/namei.c:1334: if (test_cflag(COMMIT_Stale, old_dir)) {
fs/jfs/namei.c-1335- if (old_dir->i_size > 1)
--
fs/jfs/namei.c-1337-
fs/jfs/namei.c:1338: clear_cflag(COMMIT_Stale, old_dir);
fs/jfs/namei.c-1339- }
] |
| 163/2 | 2026/03/16 16:11 | llm | debugger |
0m Model:gemini-3.1-pro-preview Tokens: input: 168229 output: 47 thoughts: 1105 Thoughts: **The Root of the Problem: A Dereferenced Null Pointer** Okay, let's break this down. I see that `COMMIT_Stale` gets set in `dtInitRoot`. `dtInitRoot` is called in `dtDeleteUp` when a root page becomes empty. Hang on... `dtInitRoot` on the child directory means `COMMIT_Stale` is set on it! `jfs_rmdir` checks `COMMIT_Stale` on the parent directory, `dip`, but it checks `dip` because `dtDelete` was called on `dip` to remove the child directory. Right, if the parent directory becomes empty, `dtInitRoot` sets `COMMIT_Stale` on `dip`. But what about the child? It was already empty since `dtEmpty(ip)` was true. Aha! `dtInitRoot` was already called on the child when its last entry was deleted! And when `dtInitRoot` was called, `COMMIT_Stale` was set on the child! The caller of `dtDelete` (like `jfs_unlink`, `jfs_rmdir`, or `jfs_rename`) checked `COMMIT_Stale` on the child and called `jfs_truncate_nolock(ip, 0)`. So the child's index table was already truncated when it became empty. Wait a second, if it was truncated, why are there still dirty metapages? Ah, because `jfs_truncate_nolock` truncates the *file*, which removes the folios from the page cache. But those dirty metapages are probably still in the `tlock` list of the transaction that truncated it. Right, the transaction that truncated it, when it commits, calls `txUnlock`. `txUnlock` calls `put_metapage(mp)`, which calls `release_metapage(mp)`. `release_metapage` sees `META_dirty` and calls `metapage_write_one(folio)`, which calls `metapage_write_folio(folio, &wbc)`. And here's the kicker... `metapage_write_folio` dereferences `folio->mapping->host`, but `folio->mapping` is NULL! Yes, it all clicks! When a directory becomes empty, `dtInitRoot` gets called, which calls `xtTruncate(tid, ip, 0, COMMIT_PWMAP)`. Then the caller (e.g., `jfs_unlink`) calls `jfs_truncate_nolock(ip, 0)`.
`jfs_truncate_nolock` calls `truncate_setsize(ip, 0)` and removes the folios. But those dirty index table metapages are still in the `tlock` list. When the transaction commits, `txUnlock` happens, and we hit the same sequence as above, leading to the NULL pointer dereference. But why are those metapages dirty in the first place? They were modified by the transaction *before* they were truncated. For instance, `dtDelete` modifies the index table, marking it dirty. Then `dtDeleteUp` calls `dtInitRoot`, which truncates the index table, so the folio is removed from the page cache. But the metapage is still dirty! When the transaction commits, `release_metapage` tries to write it back, but the folio's mapping is NULL because it's been truncated! Bingo! It's a classic use-after-free or NULL pointer dereference bug. The fix is straightforward. In `release_metapage`, check if `folio->mapping` is NULL. If it is, the folio has been truncated, and we shouldn't write it back. Actually, we should clear the dirty bit too. Perfect. Let me just double-check `release_metapage`... |
| 164/2 | 2026/03/16 16:12 | tool | codesearch-definition-source |
0m Args:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:release_metapage] Results: map[SourceCode: 865: void release_metapage(struct metapage * mp)
866: {
867: struct folio *folio = mp->folio;
868: jfs_info("release_metapage: mp = 0x%p, flag = 0x%lx", mp, mp->flag);
869:
870: folio_lock(folio);
871: unlock_metapage(mp);
872:
873: assert(mp->count);
874: if (--mp->count || mp->nohomeok) {
875: folio_unlock(folio);
876: folio_put(folio);
877: return;
878: }
879:
880: if (test_bit(META_dirty, &mp->flag)) {
881: folio_mark_dirty(folio);
882: if (test_bit(META_sync, &mp->flag)) {
883: clear_bit(META_sync, &mp->flag);
884: if (metapage_write_one(folio))
885: jfs_error(mp->sb, "metapage_write_one() failed\n");
886: folio_lock(folio);
887: }
888: } else if (mp->lsn) /* discard_metapage doesn't remove it */
889: remove_from_logsync(mp);
890:
891: /* Try to keep metapages from using up too much memory */
892: drop_metapage(folio, mp);
893:
894: folio_unlock(folio);
895: folio_put(folio);
896: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 165/2 | 2026/03/16 16:12 | llm | debugger |
1m Model:gemini-3.1-pro-preview Tokens: input: 170724 output: 49 thoughts: 518 Thoughts: **Analyzing Metapage Dirty Flag and Logsync Handling** Okay, let's break this down. My initial thought was that if `folio->mapping` is NULL, we shouldn't be calling `folio_mark_dirty` or `metapage_write_one`. But hold on, it seems `folio_mark_dirty` is safe with a NULL mapping, while `metapage_write_one` is the problem. Therefore, the code block provided seems appropriate. Now, considering the provided code snippet, I see it tests the `META_dirty` bit. If set, and if `folio->mapping` is valid, it marks the folio dirty, and if `META_sync` is also set, attempts to write the metapage. However, if `META_dirty` is set but `folio->mapping` is NULL, it clears both `META_dirty` and `META_sync`. My concern arises from the handling of `logsync`. If `mp->lsn` is set, `remove_from_logsync(mp)` is called. But wait, if `test_bit(META_dirty, &mp->flag)` is true, `remove_from_logsync` is *not* called! This suggests a potential issue. The crux of my concern is that if a metapage is dirty, but the `folio->mapping` is NULL, we clear the dirty bit but *don't* remove it from the logsync queue. This looks wrong to me. If the `lsn` is set, it *should* be removed from `logsync`, regardless of the dirty state. Furthermore, if `test_bit(META_dirty, &mp->flag)` is true, the code calls `metapage_write_one`. My next thought is, does `metapage_write_one` itself handle removing the metapage from `logsync`? I'm not sure; I'll need to double-check the call stack. No, it looks like `metapage_write_folio` calls `remove_from_logsync`. That should be the point of investigation. |
| 166/2 | 2026/03/16 16:14 | tool | codesearch-definition-source |
0mArgs:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:metapage_write_folio] Results: map[SourceCode: 423: static int metapage_write_folio(struct folio *folio,
424: struct writeback_control *wbc)
425: {
426: struct bio *bio = NULL;
427: int block_offset; /* block offset of mp within page */
428: struct inode *inode = folio->mapping->host;
429: int blocks_per_mp = JFS_SBI(inode->i_sb)->nbperpage;
430: int len;
431: int xlen;
432: struct metapage *mp;
433: int redirty = 0;
434: sector_t lblock;
435: int nr_underway = 0;
436: sector_t pblock;
437: sector_t next_block = 0;
438: sector_t page_start;
439: unsigned long bio_bytes = 0;
440: unsigned long bio_offset = 0;
441: int offset;
442: int bad_blocks = 0;
443:
444: page_start = folio_pos(folio) >> inode->i_blkbits;
445: BUG_ON(!folio_test_locked(folio));
446: BUG_ON(folio_test_writeback(folio));
447: folio_start_writeback(folio);
448:
449: for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
450: mp = folio_to_mp(folio, offset);
451:
452: if (!mp || !test_bit(META_dirty, &mp->flag))
453: continue;
454:
455: if (mp->nohomeok && !test_bit(META_forcewrite, &mp->flag)) {
456: redirty = 1;
457: /*
458: * Make sure this page isn't blocked indefinitely.
459: * If the journal isn't undergoing I/O, push it
460: */
461: if (mp->log && !(mp->log->cflag & logGC_PAGEOUT))
462: jfs_flush_journal(mp->log, 0);
463: continue;
464: }
465:
466: clear_bit(META_dirty, &mp->flag);
467: set_bit(META_io, &mp->flag);
468: block_offset = offset >> inode->i_blkbits;
469: lblock = page_start + block_offset;
470: if (bio) {
471: if (xlen && lblock == next_block) {
472: /* Contiguous, in memory & on disk */
473: len = min(xlen, blocks_per_mp);
474: xlen -= len;
475: bio_bytes += len << inode->i_blkbits;
476: continue;
477: }
478: /* Not contiguous */
479: bio_add_folio_nofail(bio, folio, bio_bytes, bio_offset);
480: /*
481: * Increment counter before submitting i/o to keep
482: * count from hitting zero before we're through
483: */
484: inc_io(folio);
485: if (!bio->bi_iter.bi_size)
486: goto dump_bio;
487: submit_bio(bio);
488: nr_underway++;
489: bio = NULL;
490: } else
491: inc_io(folio);
492: xlen = (folio_size(folio) - offset) >> inode->i_blkbits;
493: pblock = metapage_get_blocks(inode, lblock, &xlen);
494: if (!pblock) {
495: printk(KERN_ERR "JFS: metapage_get_blocks failed\n");
496: /*
497: * We already called inc_io(), but can't cancel it
498: * with dec_io() until we're done with the page
499: */
500: bad_blocks++;
501: continue;
502: }
503: len = min(xlen, (int)JFS_SBI(inode->i_sb)->nbperpage);
504:
505: bio = bio_alloc(inode->i_sb->s_bdev, 1, REQ_OP_WRITE, GFP_NOFS);
506: bio->bi_iter.bi_sector = pblock << (inode->i_blkbits - 9);
507: bio->bi_end_io = metapage_write_end_io;
508: bio->bi_private = folio;
509:
510: /* Don't call bio_add_page yet, we may add to this vec */
511: bio_offset = offset;
512: bio_bytes = len << inode->i_blkbits;
513:
514: xlen -= len;
515: next_block = lblock + len;
516: }
517: if (bio) {
518: bio_add_folio_nofail(bio, folio, bio_bytes, bio_offset);
519: if (!bio->bi_iter.bi_size)
520: goto dump_bio;
521:
522: submit_bio(bio);
523: nr_underway++;
524: }
525: if (redirty)
526: folio_redirty_for_writepage(wbc, folio);
527:
528: folio_unlock(folio);
529:
530: if (bad_blocks)
531: goto err_out;
532:
533: if (nr_underway == 0)
534: folio_end_writeback(folio);
535:
536: return 0;
537: dump_bio:
538: print_hex_dump(KERN_ERR, "JFS: dump of bio: ", DUMP_PREFIX_ADDRESS, 16,
539: 4, bio, sizeof(*bio), 0);
540: bio_put(bio);
541: folio_unlock(folio);
542: dec_io(folio, BLK_STS_OK, last_write_complete);
543: err_out:
544: while (bad_blocks--)
545: dec_io(folio, BLK_STS_OK, last_write_complete);
546: return -EIO;
547: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 167/2 | 2026/03/16 16:14 | llm | debugger |
0mModel:gemini-3.1-pro-preview Tokens: input: 173710 output: 47 thoughts: 876 Thoughts: **Refining the Metapage Handling Logic**
Okay, so I'm revisiting this metapage write/release logic. My initial thought was that `metapage_write_folio` directly calls `remove_from_logsync`, but that's not accurate. `remove_from_logsync` comes into play when the metapage is discarded or released and isn't dirty, that much I have clear in my head.
Now, let's consider the case where `folio->mapping` is NULL. This is key. If the mapping is null, the folio has effectively been truncated. In that case, I shouldn't try to write it back! Instead, I want to clear the dirty bits and call `remove_from_logsync` if it's already in the logsync list. So I was thinking that I'd simplify this.
I was considering this revised code, essentially merging the two existing blocks of code where the logic hinges on `folio->mapping` being non-NULL or NULL.
```c
if (folio->mapping) {
if (test_bit(META_dirty, &mp->flag)) {
folio_mark_dirty(folio);
if (test_bit(META_sync, &mp->flag)) {
clear_bit(META_sync, &mp->flag);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
folio_lock(folio);
}
} else if (mp->lsn) /* discard_metapage doesn't remove it */
remove_from_logsync(mp);
} else {
clear_bit(META_dirty, &mp->flag);
clear_bit(META_sync, &mp->flag);
if (mp->lsn)
remove_from_logsync(mp);
}
```
Is this correct? Yeah, it seems sound. If `folio->mapping` is NULL, writing it back would lead to a crash because there is nothing to write to. I should just clear the dirty bits and remove it from `logsync`. No need to try and write.
Now, I need to be sure. I have to make sure there aren't any other places in `release_metapage` where `folio->mapping` could cause a problem. Nope, `release_metapage` only uses the folio for locking/unlocking, and calls `folio_mark_dirty` and `metapage_write_one`. The `metapage_write_one` calls `folio_clear_dirty_for_io`, which handles the `folio->mapping == NULL` case, but then calls `metapage_write_folio`, which would then crash.
I should check the different cases where `folio->mapping` can be NULL. It could be NULL for an anonymous folio, but JFS metapages are always file-backed, so that case doesn't exist. Therefore `folio->mapping` is NULL only if the folio has been removed from the page cache, i.e. truncated or invalidated.
Ah, and I also know that if `folio->mapping` is NULL, `drop_metapage` is also called at the end of `release_metapage`, so let me check that...
|
| 168/2 | 2026/03/16 16:14 | tool | codesearch-definition-source |
0mArgs:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:drop_metapage] Results: map[SourceCode: 309: static inline void drop_metapage(struct folio *folio, struct metapage *mp)
310: {
311: if (mp->count || mp->nohomeok || test_bit(META_dirty, &mp->flag) ||
312: test_bit(META_io, &mp->flag))
313: return;
314: remove_metapage(folio, mp);
315: INCREMENT(mpStat.pagefree);
316: free_metapage(mp);
317: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 169/2 | 2026/03/16 16:14 | llm | debugger |
0mModel:gemini-3.1-pro-preview Tokens: input: 175553 output: 47 thoughts: 93 Thoughts: **Analyzing the Metapage Dropping Process** Okay, so I'm looking at how `drop_metapage` functions. It seems like the critical condition for calling `remove_metapage` is a clean metapage - specifically, when `mp->count`, `mp->nohomeok`, `META_dirty`, and `META_io` are all false. That makes sense; we wouldn't want to remove a metapage if it's actively in use, pinned by `nohomeok` (not yet allowed to be written back to its home location), marked dirty, or involved in I/O. So far, so good. Now, I see a specific edge case. If `folio->mapping` is NULL, it implies that `META_dirty` was previously cleared. In that scenario, `drop_metapage` *will* call `remove_metapage`. This signals that we're dealing with a special clean-up situation where the mapping is already gone. Alright, that seems straightforward enough to analyze... Now, let's dive into the details of `remove_metapage` to get a full picture of the complete process and implications. I need to be sure this is working correctly. |
| 170/2 | 2026/03/16 16:14 | tool | codesearch-definition-source |
0mArgs:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:remove_metapage] Results: map[SourceCode: 222: static inline void remove_metapage(struct folio *folio, struct metapage *mp)
223: {
224: folio_detach_private(folio);
225: kunmap(&folio->page);
226: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 171/2 | 2026/03/16 16:14 | llm | debugger |
0mModel:gemini-3.1-pro-preview Tokens: input: 176050 output: 25 thoughts: 2173 Thoughts: **Metapage Handling During Truncation and the Quest to Avoid Leaks**
Okay, so here's the deal. I'm looking at this `remove_metapage` function, and it seems safe. It calls `folio_detach_private` and `kunmap`, even if the mapping is NULL. That makes sense; `folio_detach_private` should clear the `folio->private` and `PG_private` if the mapping is already gone. Then, `free_metapage` takes care of deallocating the memory. Perfect! The metapage is discarded and freed.
Now, I'm analyzing the `release_metapage` function and trying to understand a reported crash during directory deletion (specifically, after `truncate_setsize` has removed the folio from the page cache).
It looks like the initial version of `release_metapage` has a potential issue. It checks `META_dirty`, marks the folio dirty, and attempts to write it back. If `META_sync` is also set, it calls `metapage_write_one`. The critical problem is if `folio->mapping` is NULL. `folio_mark_dirty` effectively does nothing, but `metapage_write_one` will dereference `folio->mapping` and crash.
So, the first thought is to insert a check for `folio->mapping` before calling `metapage_write_one`. That's important! Now I remember that a NULL mapping means the folio has been truncated. *However*, if we just skip the write and don't clear the `META_dirty` flag, we're still in trouble.
If the folio is truncated (mapping is NULL) and *dirty*, the current `release_metapage` would return early (without freeing the metapage), which would lead to a *memory leak*! This is because `drop_metapage` won't free the metapage if `META_dirty` is set and the folio is no longer in the page cache! And it will never get written back!
Therefore, the fix *must* include clearing the `META_dirty` flag when `folio->mapping` is NULL. We need to tell the system to discard the metapage since the associated data is gone. I propose this in the `release_metapage` function:
```c
if (folio->mapping) {
if (test_bit(META_dirty, &mp->flag)) {
folio_mark_dirty(folio);
if (test_bit(META_sync, &mp->flag)) {
clear_bit(META_sync, &mp->flag);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
folio_lock(folio);
}
} else if (mp->lsn) /* discard_metapage doesn't remove it */
remove_from_logsync(mp);
} else {
clear_bit(META_dirty, &mp->flag);
clear_bit(META_sync, &mp->flag);
if (mp->lsn)
remove_from_logsync(mp);
}
```
Wait, before I call it done, let me double-check for all references to `folio->mapping`. And bingo! `metapage_write_folio` calls `folio->mapping->host`. Checking `folio->mapping` in `release_metapage` prevents a crash and fixes the leak. That's good!
Let's trace the crash again and build the explanation:
1. A directory gets deleted.
2. `jfs_rmdir` calls `dtDelete` to remove the entry.
3. The directory becomes empty, calling `dtInitRoot`.
4. `dtInitRoot` truncates the directory's index table via `xtTruncate`. This also sets `COMMIT_Stale` for the directory.
5. The caller (like `jfs_unlink` or `jfs_rmdir`) notices `COMMIT_Stale` and calls `jfs_truncate_nolock(ip, 0)`.
6. `jfs_truncate_nolock` uses `truncate_setsize`, which removes all folios from the page cache. This means all of their `folio->mapping` pointers are set to NULL.
7. The dirty metapages remain in the transaction's `tlock` list.
8. The transaction commits, calling `txUnlock`.
9. `txUnlock` calls `put_
|
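The release path reasoned through above can be modeled in plain userspace C. This is only a sketch of the *proposed* logic, not kernel code: the struct fields and helper names below are simplified stand-ins for the kernel's `META_dirty`/`META_sync` bits, `remove_from_logsync()`, `metapage_write_one()`, and the `drop_metapage()` gate.

```c
/* Userspace model (not kernel code) of the proposed release_metapage()
 * branch: if the folio was truncated (mapping == NULL), skip writeback,
 * clear the dirty/sync state, and drop the lsn so drop_metapage() can
 * free the metapage instead of leaking it. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct folio { void *mapping; };        /* NULL once truncated */
struct metapage {
	bool dirty;                     /* models META_dirty */
	bool sync;                      /* models META_sync */
	int  lsn;                       /* nonzero: on the logsync list */
	bool wrote_back;                /* did we reach writeback? */
};

static void release_metapage_model(struct folio *f, struct metapage *mp)
{
	if (f->mapping) {
		if (mp->dirty) {
			if (mp->sync) {
				mp->sync = false;
				mp->wrote_back = true;  /* metapage_write_one() */
			}
		} else if (mp->lsn) {
			mp->lsn = 0;            /* remove_from_logsync() */
		}
	} else {
		/* Truncated: never write back, just discard the dirty state. */
		mp->dirty = false;
		mp->sync = false;
		if (mp->lsn)
			mp->lsn = 0;            /* remove_from_logsync() */
	}
}

/* drop_metapage() only frees when the metapage is completely clean. */
static bool can_drop(const struct metapage *mp)
{
	return !mp->dirty;
}
```

With the truncated branch, a dirty metapage on a mapping-less folio never reaches the writeback stand-in (no NULL dereference) and ends up clean, so `can_drop()` accepts it and the leak described above cannot happen.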
| 172/2 | 2026/03/16 16:15 | tool | codesearch-find-references |
0mArgs:map[Name:metapage_write_one] Results: map[References:[map[ReferenceKind:calls ReferencingEntityKind:function ReferencingEntityName:force_metapage SourceFile:fs/jfs/jfs_metapage.c SourceLine:840] map[ReferenceKind:calls ReferencingEntityKind:function ReferencingEntityName:release_metapage SourceFile:fs/jfs/jfs_metapage.c SourceLine:884]] TruncatedOutput:false] |
| 173/2 | 2026/03/16 16:15 | llm | debugger |
0mModel:gemini-3.1-pro-preview Tokens: input: 179403 output: 47 thoughts: 20 Thoughts: Okay, here's the summary, rendered as a first-person thought process, aimed at an expert audience: **Examining the Interplay of Metapage Functions** Right, let me see... I'm tracing the execution flow, and I've just landed on `force_metapage`. Okay, so `force_metapage` is one of the key functions related to, well, forcing an update or write to a metapage, obviously. Now, I need to understand its relationship with other lower level functions. Ah! There it is. *That's* interesting... `force_metapage` *also* directly calls `metapage_write_one`. So, that's the core write function, the direct writer, invoked by the forced metapage update. This gives me a clearer picture of the architecture. I need to note this relationship and where I'm at. Now, I can move forward with my analysis. |
| 174/2 | 2026/03/16 16:15 | tool | codesearch-definition-source |
0mArgs:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:force_metapage] Results: map[SourceCode: 831: void force_metapage(struct metapage *mp)
832: {
833: struct folio *folio = mp->folio;
834: jfs_info("force_metapage: mp = 0x%p", mp);
835: set_bit(META_forcewrite, &mp->flag);
836: clear_bit(META_sync, &mp->flag);
837: folio_get(folio);
838: folio_lock(folio);
839: folio_mark_dirty(folio);
840: if (metapage_write_one(folio))
841: jfs_error(mp->sb, "metapage_write_one() failed\n");
842: clear_bit(META_forcewrite, &mp->flag);
843: folio_put(folio);
844: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 175/2 | 2026/03/16 16:15 | llm | debugger |
0mModel:gemini-3.1-pro-preview Tokens: input: 179952 output: 23 thoughts: 502 Thoughts: **Analyzing `force_metapage` and Potential Issues with Truncated Folios** Okay, so I'm looking at this `force_metapage` function, and it seems like there's a potential crash scenario if it's called on a truncated folio. That is, what happens if `force_metapage` is called when the folio is truncated? We need to make sure this function handles that situation correctly. My first thought is, should we explicitly check `folio->mapping` within `force_metapage`? Absolutely, it should check `folio->mapping`. If the folio has been truncated, `folio->mapping` will likely be NULL. The provided code does have a check for `folio->mapping`, but the logic needs more careful review. Okay, let's break it down. `force_metapage` gets a metapage, sets a bit, acquires the folio lock, checks the mapping. If the folio has a valid mapping, it marks the folio dirty, and calls `metapage_write_one`. *If* the mapping is NULL, it clears the dirty bit and unlocks the folio. This makes sense, but `metapage_write_one` unlocks the folio if it clears the dirty bit. Aha! Now I see a subtlety. *If* `folio->mapping` is NULL, `metapage_write_one` is *not* called. Therefore, we **must** explicitly unlock the folio ourselves to avoid blocking forever. Let's look more closely at the implications. `metapage_write_one` is what causes the unlock in some cases. `metapage_write_one` calls `folio_clear_dirty_for_io`, and, if the folio is dirty and needs to be written, will call `metapage_write_folio`, which will call `folio_wait_writeback` (if needed) and ultimately `folio_unlock`. However, if the folio is not dirty, `folio_clear_dirty_for_io` returns non-zero, and unlocks the folio. So, the key is this: if we *don't* call `metapage_write_one`, we *must* unlock the folio. The provided code does this! Okay, so the current check *is* correct. `folio_unlock(folio)` *is* needed when `folio->mapping` is NULL. 
I'll want to check the callers of `force_metapage` to make sure we're not missing any edge cases. |
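The locking argument above can likewise be sketched in userspace C: on the truncated path `metapage_write_one()` is never called, so nothing else will drop the folio lock, and the explicit unlock is mandatory. The names here are simplified stand-ins, not the real kernel API.

```c
/* Userspace model (not kernel code) of the force_metapage() variant
 * discussed above. write_one_model() stands in for metapage_write_one(),
 * which unlocks the folio on all of its paths; the truncated branch
 * must therefore unlock explicitly. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct folio { void *mapping; bool locked; };
struct metapage { struct folio *folio; bool dirty; };

/* stand-in for metapage_write_one(): writes back and unlocks */
static int write_one_model(struct folio *f)
{
	f->locked = false;
	return 0;
}

static void force_metapage_model(struct metapage *mp)
{
	struct folio *f = mp->folio;

	f->locked = true;               /* folio_lock() */
	if (f->mapping) {
		write_one_model(f);     /* unlocks on all paths */
	} else {
		mp->dirty = false;      /* truncated: discard dirty state */
		f->locked = false;      /* explicit folio_unlock() */
	}
}
```

Dropping the `f->locked = false` line in the else branch would leave the model's folio locked forever, which is exactly the hang the reasoning above guards against.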
| 176/2 | 2026/03/16 16:15 | tool | codesearch-find-references |
0mArgs:map[Name:force_metapage] Results: map[References:[map[ReferenceKind:calls ReferencingEntityKind:function ReferencingEntityName:txForce SourceFile:fs/jfs/jfs_txnmgr.c SourceLine:2216]] TruncatedOutput:false] |
| 177/2 | 2026/03/16 16:15 | llm | debugger |
0mModel:gemini-3.1-pro-preview Tokens: input: 181096 output: 44 thoughts: 256 Thoughts: **Analyzing the force_metapage Logic** Okay, so I see that `txForce` is calling `force_metapage`. This is interesting, let's break this down. First, it checks if `tlck->mp` (which I assume is a metapage pointer) is valid and if the `tlck` isn't a root node, which makes sense. Then, the code asserts that the `mp`'s `xflag` has the `COMMIT_PAGE` bit set, good. This confirms it is about pages that are being prepared for committing. The conditional logic that follows is key. If the `tlck` has the `tlckFREEPAGE` flag set, then the `META_dirty` bit on the metapage is cleared and the `META_discard` bit is set. If the page had an LSN (Log Sequence Number), then it is removed from the `logsync` list. This indicates a page that's being *freed* or discarded during the commit process, meaning it will not be written to the log. Otherwise, if `tlckFREEPAGE` isn't set, the page is being made dirty for a commit: it is added to the `logsync` list, its `lsn` is updated, and the `META_dirty` and `META_sync` bits are both set. This puts the page on the sync list to be written to disk. The final `put_metapage(mp)` call suggests a reference-count release mechanism for the metapage. My interpretation is that the code is handling the fate of metapages during a transaction commit: pages marked for freeing are discarded, and those that *need* to be persisted are placed on a log sync list for the upcoming flush. This seems like a reasonable approach to managing the persistence requirements of committed transactions. |
| 178/2 | 2026/03/16 16:16 | tool | codesearch-definition-source |
0mArgs:map[ContextFile:fs/jfs/jfs_txnmgr.c IncludeLines:true Name:txForce] Results: map[SourceCode:2172: /*
2173: * txForce()
2174: *
2175: * function: synchronously write pages locked by transaction
2176: * after txLog() but before txUpdateMap();
2177: */
2178: static void txForce(struct tblock * tblk)
2179: {
2180: struct tlock *tlck;
2181: lid_t lid, next;
2182: struct metapage *mp;
2183:
2184: /*
2185: * reverse the order of transaction tlocks in
2186: * careful update order of address index pages
2187: * (right to left, bottom up)
2188: */
2189: tlck = lid_to_tlock(tblk->next);
2190: lid = tlck->next;
2191: tlck->next = 0;
2192: while (lid) {
2193: tlck = lid_to_tlock(lid);
2194: next = tlck->next;
2195: tlck->next = tblk->next;
2196: tblk->next = lid;
2197: lid = next;
2198: }
2199:
2200: /*
2201: * synchronously write the page, and
2202: * hold the page for txUpdateMap();
2203: */
2204: for (lid = tblk->next; lid; lid = next) {
2205: tlck = lid_to_tlock(lid);
2206: next = tlck->next;
2207:
2208: if ((mp = tlck->mp) != NULL &&
2209: (tlck->type & tlckBTROOT) == 0) {
2210: assert(mp->xflag & COMMIT_PAGE);
2211:
2212: if (tlck->flag & tlckWRITEPAGE) {
2213: tlck->flag &= ~tlckWRITEPAGE;
2214:
2215: /* do not release page to freelist */
2216: force_metapage(mp);
2217: #if 0
2218: /*
2219: * The "right" thing to do here is to
2220: * synchronously write the metadata.
2221: * With the current implementation this
2222: * is hard since write_metapage requires
2223: * us to kunmap & remap the page. If we
2224: * have tlocks pointing into the metadata
2225: * pages, we don't want to do this. I think
2226: * we can get by with synchronously writing
2227: * the pages when they are released.
2228: */
2229: assert(mp->nohomeok);
2230: set_bit(META_dirty, &mp->flag);
2231: set_bit(META_sync, &mp->flag);
2232: #endif
2233: }
2234: }
2235: }
2236: }
SourceFile:fs/jfs/jfs_txnmgr.c] |
| 179/2 | 2026/03/16 16:16 | llm | debugger |
0mModel:gemini-3.1-pro-preview Tokens: input: 182741 output: 51 thoughts: 774 Thoughts: **My Analysis of the JFS Metapage Write Issue** Okay, so here's the breakdown of this JFS bug that I've been working through. My understanding is that when a directory is removed, we're encountering a subtle race condition that leads to a nasty NULL pointer dereference. Specifically, the problem occurs when the index table of a directory is truncated after the directory has been emptied and marked as stale. It starts with the standard `jfs_rmdir` operation. The directory entry is removed. If the directory is now empty, `dtInitRoot` kicks in and calls `xtTruncate`, which effectively truncates the directory's index table. This sets `folio->mapping` to NULL, meaning the folio is no longer associated with a valid mapping in the page cache. Meanwhile, these metapages stay dirty, and are added to a list for later writes. The problem lies in how these now-unmapped folios are handled during transaction commit. The `txUnlock` function, either synchronously or asynchronously, eventually calls `put_metapage`, which, in turn, calls `release_metapage`. *This* is where the issue surfaces. `release_metapage` checks if `META_dirty` is set and, finding it is, calls `metapage_write_one`. The critical part is that `metapage_write_one` then tries to dereference `folio->mapping->host`, even though `folio->mapping` has been set to NULL, leading to a kernel panic or GPF. The proposed fix makes perfect sense. In *both* `release_metapage` and `force_metapage`, we need to check if `folio->mapping` is NULL before attempting the write. If it *is* NULL, we know the folio has been truncated and removed. Instead of trying to write back a stale metapage, we should simply clear the `META_dirty` and `META_sync` flags, unlock the folio, and remove the metapage from the logsync list. 
This prevents the NULL pointer dereference, and also ensures that `drop_metapage` is able to free the metapage, avoiding a memory leak. `txForce`, via its call to `force_metapage`, is the second path that needs the check. This way, we handle the unmapped folio gracefully, and the filesystem remains stable. It's a clean and efficient solution to a tricky race condition. |
| 180/2 | 2026/03/16 16:16 | tool | codeexpert |
5mArgs:map[Question:Is there any other place in fs/jfs/jfs_metapage.c where `folio->mapping` is dereferenced without checking if it's NULL?] Results: map[Answer:In `fs/jfs/jfs_metapage.c`, there are several places where `folio->mapping` is dereferenced without checking if it is `NULL`.
While `folio->mapping` is typically non-NULL for folios in the page cache, it can become `NULL` if a folio is truncated or evicted while an operation is in progress or pending. This is particularly risky in asynchronous callbacks or functions called during folio release.
The identified places are:
1. **`insert_metapage()`** (line 117, when `MPS_PER_PAGE > 1`):
```c
l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
```
It dereferences `folio->mapping` to access the inode's block bits.
2. **`remove_metapage()`** (line 128, when `MPS_PER_PAGE > 1`):
```c
int l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
```
Similar to `insert_metapage`, it dereferences `folio->mapping` without a check.
3. **`last_write_complete()`** (line 450):
```c
mapping_set_error(folio->mapping, err);
```
This is an I/O completion handler. If the folio was truncated while the write was in flight, `folio->mapping` could be `NULL`, causing `mapping_set_error` to crash when it attempts to set flags on the mapping.
4. **`metapage_write_folio()`** (line 476):
```c
struct inode *inode = folio->mapping->host;
```
This function is called during writeback. While usually safe when called via `writepages`, it is also called via `metapage_write_one()`, which can be triggered during metapage release.
5. **`metapage_read_folio()`** (line 571):
```c
struct inode *inode = folio->mapping->host;
```
As an `address_space_operation`, it expects a valid mapping, but it dereferences it immediately without a check.
6. **`metapage_write_one()`** (line 818 and 837):
```c
struct address_space *mapping = folio->mapping;
...
ret = filemap_check_errors(mapping);
```
If `folio->mapping` is `NULL`, `filemap_check_errors(NULL)` will result in a NULL pointer dereference.
Note that the dereferences in `insert_metapage` and `remove_metapage` only occur on architectures where `PAGE_SIZE` is greater than `PSIZE` (4096 bytes), such as some ARM64 or PowerPC configurations. On standard 4KB page architectures, these specific lines are bypassed by the `#if MPS_PER_PAGE > 1` block. However, the other locations are always active.] |
| 181/3 | 2026/03/16 16:16 | agent | codeexpert |
5mModel:gemini-3-flash-preview Instruction: You are a capable Linux kernel developer tasked with researching complex questions about kernel source code. You will be given a concrete question, and need to provide a concrete answer. Use tools extensively while researching the question. Don't make assumptions, or rely on your previous knowledge about the kernel source code, use available tools to access the actual source code. Use all available sources of information: - kernel source code - documentation in the Documentation dir in the source tree - git commits descriptions, git blame Prefer calling several tools at the same time to save round-trips. Prompt: Is there any other place in fs/jfs/jfs_metapage.c where `folio->mapping` is dereferenced without checking if it's NULL? Reply: In `fs/jfs/jfs_metapage.c`, there are several places where `folio->mapping` is dereferenced without checking if it is `NULL`.
While `folio->mapping` is typically non-NULL for folios in the page cache, it can become `NULL` if a folio is truncated or evicted while an operation is in progress or pending. This is particularly risky in asynchronous callbacks or functions called during folio release.
The identified places are:
1. **`insert_metapage()`** (line 117, when `MPS_PER_PAGE > 1`):
```c
l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
```
It dereferences `folio->mapping` to access the inode's block bits.
2. **`remove_metapage()`** (line 128, when `MPS_PER_PAGE > 1`):
```c
int l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
```
Similar to `insert_metapage`, it dereferences `folio->mapping` without a check.
3. **`last_write_complete()`** (line 450):
```c
mapping_set_error(folio->mapping, err);
```
This is an I/O completion handler. If the folio was truncated while the write was in flight, `folio->mapping` could be `NULL`, causing `mapping_set_error` to crash when it attempts to set flags on the mapping.
4. **`metapage_write_folio()`** (line 476):
```c
struct inode *inode = folio->mapping->host;
```
This function is called during writeback. While usually safe when called via `writepages`, it is also called via `metapage_write_one()`, which can be triggered during metapage release.
5. **`metapage_read_folio()`** (line 571):
```c
struct inode *inode = folio->mapping->host;
```
As an `address_space_operation`, it expects a valid mapping, but it dereferences it immediately without a check.
6. **`metapage_write_one()`** (line 818 and 837):
```c
struct address_space *mapping = folio->mapping;
...
ret = filemap_check_errors(mapping);
```
If `folio->mapping` is `NULL`, `filemap_check_errors(NULL)` will result in a NULL pointer dereference.
Note that the dereferences in `insert_metapage` and `remove_metapage` only occur on architectures where `PAGE_SIZE` is greater than `PSIZE` (4096 bytes), such as some ARM64 or PowerPC configurations. On standard 4KB page architectures, these specific lines are bypassed by the `#if MPS_PER_PAGE > 1` block. However, the other locations are always active. |
| 182/4 | 2026/03/16 16:16 | llm | codeexpert |
0mModel:gemini-3-flash-preview Tokens: input: 2289 output: 28 thoughts: 124 Thoughts: **Analyzing Potential NULL Pointer Dereferences in jfs_metapage.c** Okay, so the user is concerned about potential NULL pointer dereferences of `folio->mapping` within `fs/jfs/jfs_metapage.c`. Right, that's a classic source of headaches, especially in kernel code. My immediate thought is to delve into the code and systematically investigate this. Here's my plan: 1. First, I'll need to either quickly list the contents of the file `fs/jfs/jfs_metapage.c`, or just do a search within that file for instances of `folio->mapping`. That should give me a good overview of the landscape. 2. Next, I'll meticulously go through *each* occurrence of `folio->mapping`, one by one. I need to be thorough here. 3. For *every* `folio->mapping` dereference I find, I'll need to critically evaluate the surrounding code. My primary concern? Is there a proper NULL check *before* the dereference? If there isn't, or if the check is weak or flawed, then we have a potential vulnerability. It's not enough to *assume* a check exists; I need to verify its effectiveness. I will look for other implicit checks or code structures that indirectly protect against NULL access, but I won't assume those protections exist without clear proof. Let's get this investigation underway. Time to get my hands dirty. |
| 183/4 | 2026/03/16 16:16 | tool | read-file |
0mArgs:map[File:fs/jfs/jfs_metapage.c] Results: map[Contents:// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Copyright (C) International Business Machines Corp., 2000-2005
* Portions Copyright (C) Christoph Hellwig, 2001-2002
*/
#include <linux/blkdev.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/bio.h>
#include <linux/slab.h>
#include <linux/init.h>
#include <linux/buffer_head.h>
#include <linux/mempool.h>
#include <linux/seq_file.h>
#include <linux/writeback.h>
#include <linux/migrate.h>
#include "jfs_incore.h"
#include "jfs_superblock.h"
#include "jfs_filsys.h"
#include "jfs_metapage.h"
#include "jfs_txnmgr.h"
#include "jfs_debug.h"
#ifdef CONFIG_JFS_STATISTICS
static struct {
uint pagealloc; /* # of page allocations */
uint pagefree; /* # of page frees */
uint lockwait; /* # of sleeping lock_metapage() calls */
} mpStat;
#endif
#define metapage_locked(mp) test_bit(META_locked, &(mp)->flag)
#define trylock_metapage(mp) test_and_set_bit_lock(META_locked, &(mp)->flag)
static inline void unlock_metapage(struct metapage *mp)
{
clear_bit_unlock(META_locked, &mp->flag);
wake_up(&mp->wait);
}
static inline void __lock_metapage(struct metapage *mp)
{
DECLARE_WAITQUEUE(wait, current);
INCREMENT(mpStat.lockwait);
add_wait_queue_exclusive(&mp->wait, &wait);
do {
set_current_state(TASK_UNINTERRUPTIBLE);
if (metapage_locked(mp)) {
folio_unlock(mp->folio);
io_schedule();
folio_lock(mp->folio);
}
} while (trylock_metapage(mp));
__set_current_state(TASK_RUNNING);
remove_wait_queue(&mp->wait, &wait);
}
/*
* Must have mp->folio locked
*/
static inline void lock_metapage(struct metapage *mp)
{
if (trylock_metapage(mp))
__lock_metapage(mp);
}
#define METAPOOL_MIN_PAGES 32
static struct kmem_cache *metapage_cache;
static mempool_t *metapage_mempool;
#define MPS_PER_PAGE (PAGE_SIZE >> L2PSIZE)
#if MPS_PER_PAGE > 1
struct meta_anchor {
int mp_count;
atomic_t io_count;
blk_status_t status;
struct metapage *mp[MPS_PER_PAGE];
};
static inline struct metapage *folio_to_mp(struct folio *folio, int offset)
{
struct meta_anchor *anchor = folio->private;
if (!anchor)
return NULL;
return anchor->mp[offset >> L2PSIZE];
}
static inline int insert_metapage(struct folio *folio, struct metapage *mp)
{
struct meta_anchor *a;
int index;
int l2mp_blocks; /* log2 blocks per metapage */
a = folio->private;
if (!a) {
a = kzalloc_obj(struct meta_anchor, GFP_NOFS);
if (!a)
return -ENOMEM;
folio_attach_private(folio, a);
kmap(&folio->page);
}
if (mp) {
l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1);
a->mp_count++;
a->mp[index] = mp;
}
return 0;
}
static inline void remove_metapage(struct folio *folio, struct metapage *mp)
{
struct meta_anchor *a = folio->private;
int l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
int index;
index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1);
BUG_ON(a->mp[index] != mp);
a->mp[index] = NULL;
if (--a->mp_count == 0) {
kfree(a);
folio_detach_private(folio);
kunmap(&folio->page);
}
}
static inline void inc_io(struct folio *folio)
{
struct meta_anchor *anchor = folio->private;
atomic_inc(&anchor->io_count);
}
static inline void dec_io(struct folio *folio, blk_status_t status,
void (*handler)(struct folio *, blk_status_t))
{
struct meta_anchor *anchor = folio->private;
if (anchor->status == BLK_STS_OK)
anchor->status = status;
if (atomic_dec_and_test(&anchor->io_count))
handler(folio, anchor->status);
}
#ifdef CONFIG_MIGRATION
static int __metapage_migrate_folio(struct address_space *mapping,
struct folio *dst, struct folio *src,
enum migrate_mode mode)
{
struct meta_anchor *src_anchor = src->private;
struct metapage *mps[MPS_PER_PAGE] = {0};
struct metapage *mp;
int i, rc;
for (i = 0; i < MPS_PER_PAGE; i++) {
mp = src_anchor->mp[i];
if (mp && metapage_locked(mp))
return -EAGAIN;
}
rc = filemap_migrate_folio(mapping, dst, src, mode);
if (rc)
return rc;
for (i = 0; i < MPS_PER_PAGE; i++) {
mp = src_anchor->mp[i];
if (!mp)
continue;
if (unlikely(insert_metapage(dst, mp))) {
/* If error, roll back previously inserted pages */
for (int j = 0 ; j < i; j++) {
if (mps[j])
remove_metapage(dst, mps[j]);
}
return -EAGAIN;
}
mps[i] = mp;
}
/* Update the metapage and remove it from src */
for (i = 0; i < MPS_PER_PAGE; i++) {
mp = mps[i];
if (mp) {
int page_offset = mp->data - folio_address(src);
mp->data = folio_address(dst) + page_offset;
mp->folio = dst;
remove_metapage(src, mp);
}
}
return 0;
}
#endif /* CONFIG_MIGRATION */
#else
static inline struct metapage *folio_to_mp(struct folio *folio, int offset)
{
return folio->private;
}
static inline int insert_metapage(struct folio *folio, struct metapage *mp)
{
if (mp) {
folio_attach_private(folio, mp);
kmap(&folio->page);
}
return 0;
}
static inline void remove_metapage(struct folio *folio, struct metapage *mp)
{
folio_detach_private(folio);
kunmap(&folio->page);
}
#define inc_io(folio) do {} while(0)
#define dec_io(folio, status, handler) handler(folio, status)
#ifdef CONFIG_MIGRATION
static int __metapage_migrate_folio(struct address_space *mapping,
struct folio *dst, struct folio *src,
enum migrate_mode mode)
{
struct metapage *mp;
int page_offset;
int rc;
mp = folio_to_mp(src, 0);
if (metapage_locked(mp))
return -EAGAIN;
rc = filemap_migrate_folio(mapping, dst, src, mode);
if (rc)
return rc;
if (unlikely(insert_metapage(dst, mp)))
return -EAGAIN;
page_offset = mp->data - folio_address(src);
mp->data = folio_address(dst) + page_offset;
mp->folio = dst;
remove_metapage(src, mp);
return 0;
}
#endif /* CONFIG_MIGRATION */
#endif
static inline struct metapage *alloc_metapage(gfp_t gfp_mask)
{
struct metapage *mp = mempool_alloc(metapage_mempool, gfp_mask);
if (mp) {
mp->lid = 0;
mp->lsn = 0;
mp->data = NULL;
mp->clsn = 0;
mp->log = NULL;
init_waitqueue_head(&mp->wait);
}
return mp;
}
static inline void free_metapage(struct metapage *mp)
{
mempool_free(mp, metapage_mempool);
}
int __init metapage_init(void)
{
/*
* Allocate the metapage structures
*/
metapage_cache = kmem_cache_create("jfs_mp", sizeof(struct metapage),
0, 0, NULL);
if (metapage_cache == NULL)
return -ENOMEM;
metapage_mempool = mempool_create_slab_pool(METAPOOL_MIN_PAGES,
metapage_cache);
if (metapage_mempool == NULL) {
kmem_cache_destroy(metapage_cache);
return -ENOMEM;
}
return 0;
}
void metapage_exit(void)
{
mempool_destroy(metapage_mempool);
kmem_cache_destroy(metapage_cache);
}
static inline void drop_metapage(struct folio *folio, struct metapage *mp)
{
if (mp->count || mp->nohomeok || test_bit(META_dirty, &mp->flag) ||
test_bit(META_io, &mp->flag))
return;
remove_metapage(folio, mp);
INCREMENT(mpStat.pagefree);
free_metapage(mp);
}
/*
* Metapage address space operations
*/
static sector_t metapage_get_blocks(struct inode *inode, sector_t lblock,
int *len)
{
int rc = 0;
int xflag;
s64 xaddr;
sector_t file_blocks = (inode->i_size + inode->i_sb->s_blocksize - 1) >>
inode->i_blkbits;
if (lblock >= file_blocks)
return 0;
if (lblock + *len > file_blocks)
*len = file_blocks - lblock;
if (inode->i_ino) {
rc = xtLookup(inode, (s64)lblock, *len, &xflag, &xaddr, len, 0);
if ((rc == 0) && *len)
lblock = (sector_t)xaddr;
else
lblock = 0;
} /* else no mapping */
return lblock;
}
static void last_read_complete(struct folio *folio, blk_status_t status)
{
if (status)
printk(KERN_ERR "Read error %d at %#llx\n", status,
folio_pos(folio));
folio_end_read(folio, status == 0);
}
static void metapage_read_end_io(struct bio *bio)
{
struct folio *folio = bio->bi_private;
dec_io(folio, bio->bi_status, last_read_complete);
bio_put(bio);
}
static void remove_from_logsync(struct metapage *mp)
{
struct jfs_log *log = mp->log;
unsigned long flags;
/*
* This can race. Recheck that log hasn't been set to null, and after
* acquiring logsync lock, recheck lsn
*/
if (!log)
return;
LOGSYNC_LOCK(log, flags);
if (mp->lsn) {
mp->log = NULL;
mp->lsn = 0;
mp->clsn = 0;
log->count--;
list_del(&mp->synclist);
}
LOGSYNC_UNLOCK(log, flags);
}
static void last_write_complete(struct folio *folio, blk_status_t status)
{
struct metapage *mp;
unsigned int offset;
if (status) {
int err = blk_status_to_errno(status);
printk(KERN_ERR "metapage_write_end_io: I/O error\n");
mapping_set_error(folio->mapping, err);
}
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (mp && test_bit(META_io, &mp->flag)) {
if (mp->lsn)
remove_from_logsync(mp);
clear_bit(META_io, &mp->flag);
}
/*
* I'd like to call drop_metapage here, but I don't think it's
* safe unless I have the page locked
*/
}
folio_end_writeback(folio);
}
static void metapage_write_end_io(struct bio *bio)
{
struct folio *folio = bio->bi_private;
BUG_ON(!folio->private);
dec_io(folio, bio->bi_status, last_write_complete);
bio_put(bio);
}
static int metapage_write_folio(struct folio *folio,
struct writeback_control *wbc)
{
struct bio *bio = NULL;
int block_offset; /* block offset of mp within page */
struct inode *inode = folio->mapping->host;
int blocks_per_mp = JFS_SBI(inode->i_sb)->nbperpage;
int len;
int xlen;
struct metapage *mp;
int redirty = 0;
sector_t lblock;
int nr_underway = 0;
sector_t pblock;
sector_t next_block = 0;
sector_t page_start;
unsigned long bio_bytes = 0;
unsigned long bio_offset = 0;
int offset;
int bad_blocks = 0;
page_start = folio_pos(folio) >> inode->i_blkbits;
BUG_ON(!folio_test_locked(folio));
BUG_ON(folio_test_writeback(folio));
folio_start_writeback(folio);
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (!mp || !test_bit(META_dirty, &mp->flag))
continue;
if (mp->nohomeok && !test_bit(META_forcewrite, &mp->flag)) {
redirty = 1;
/*
* Make sure this page isn't blocked indefinitely.
* If the journal isn't undergoing I/O, push it
*/
if (mp->log && !(mp->log->cflag & logGC_PAGEOUT))
jfs_flush_journal(mp->log, 0);
continue;
}
clear_bit(META_dirty, &mp->flag);
set_bit(META_io, &mp->flag);
block_offset = offset >> inode->i_blkbits;
lblock = page_start + block_offset;
if (bio) {
if (xlen && lblock == next_block) {
/* Contiguous, in memory & on disk */
len = min(xlen, blocks_per_mp);
xlen -= len;
bio_bytes += len << inode->i_blkbits;
continue;
}
/* Not contiguous */
bio_add_folio_nofail(bio, folio, bio_bytes, bio_offset);
/*
* Increment counter before submitting i/o to keep
* count from hitting zero before we're through
*/
inc_io(folio);
if (!bio->bi_iter.bi_size)
goto dump_bio;
submit_bio(bio);
nr_underway++;
bio = NULL;
} else
inc_io(folio);
xlen = (folio_size(folio) - offset) >> inode->i_blkbits;
pblock = metapage_get_blocks(inode, lblock, &xlen);
if (!pblock) {
printk(KERN_ERR "JFS: metapage_get_blocks failed\n");
/*
* We already called inc_io(), but can't cancel it
* with dec_io() until we're done with the page
*/
bad_blocks++;
continue;
}
len = min(xlen, (int)JFS_SBI(inode->i_sb)->nbperpage);
bio = bio_alloc(inode->i_sb->s_bdev, 1, REQ_OP_WRITE, GFP_NOFS);
bio->bi_iter.bi_sector = pblock << (inode->i_blkbits - 9);
bio->bi_end_io = metapage_write_end_io;
bio->bi_private = folio;
/* Don't call bio_add_page yet, we may add to this vec */
bio_offset = offset;
bio_bytes = len << inode->i_blkbits;
xlen -= len;
next_block = lblock + len;
}
if (bio) {
bio_add_folio_nofail(bio, folio, bio_bytes, bio_offset);
if (!bio->bi_iter.bi_size)
goto dump_bio;
submit_bio(bio);
nr_underway++;
}
if (redirty)
folio_redirty_for_writepage(wbc, folio);
folio_unlock(folio);
if (bad_blocks)
goto err_out;
if (nr_underway == 0)
folio_end_writeback(folio);
return 0;
dump_bio:
print_hex_dump(KERN_ERR, "JFS: dump of bio: ", DUMP_PREFIX_ADDRESS, 16,
4, bio, sizeof(*bio), 0);
bio_put(bio);
folio_unlock(folio);
dec_io(folio, BLK_STS_OK, last_write_complete);
err_out:
while (bad_blocks--)
dec_io(folio, BLK_STS_OK, last_write_complete);
return -EIO;
}
static int metapage_writepages(struct address_space *mapping,
struct writeback_control *wbc)
{
struct blk_plug plug;
struct folio *folio = NULL;
int err;
blk_start_plug(&plug);
while ((folio = writeback_iter(mapping, wbc, folio, &err)))
err = metapage_write_folio(folio, wbc);
blk_finish_plug(&plug);
return err;
}
static int metapage_read_folio(struct file *fp, struct folio *folio)
{
struct inode *inode = folio->mapping->host;
struct bio *bio = NULL;
int block_offset;
int blocks_per_page = i_blocks_per_folio(inode, folio);
sector_t page_start; /* address of page in fs blocks */
sector_t pblock;
int xlen;
unsigned int len;
int offset;
BUG_ON(!folio_test_locked(folio));
page_start = folio_pos(folio) >> inode->i_blkbits;
block_offset = 0;
while (block_offset < blocks_per_page) {
xlen = blocks_per_page - block_offset;
pblock = metapage_get_blocks(inode, page_start + block_offset,
&xlen);
if (pblock) {
if (!folio->private)
insert_metapage(folio, NULL);
inc_io(folio);
if (bio)
submit_bio(bio);
bio = bio_alloc(inode->i_sb->s_bdev, 1, REQ_OP_READ,
GFP_NOFS);
bio->bi_iter.bi_sector =
pblock << (inode->i_blkbits - 9);
bio->bi_end_io = metapage_read_end_io;
bio->bi_private = folio;
len = xlen << inode->i_blkbits;
offset = block_offset << inode->i_blkbits;
bio_add_folio_nofail(bio, folio, len, offset);
block_offset += xlen;
} else
block_offset++;
}
if (bio)
submit_bio(bio);
else
folio_unlock(folio);
return 0;
}
static bool metapage_release_folio(struct folio *folio, gfp_t gfp_mask)
{
struct metapage *mp;
bool ret = true;
int offset;
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (!mp)
continue;
jfs_info("metapage_release_folio: mp = 0x%p", mp);
if (mp->count || mp->nohomeok ||
test_bit(META_dirty, &mp->flag)) {
jfs_info("count = %ld, nohomeok = %d", mp->count,
mp->nohomeok);
ret = false;
continue;
}
if (mp->lsn)
remove_from_logsync(mp);
remove_metapage(folio, mp);
INCREMENT(mpStat.pagefree);
free_metapage(mp);
}
return ret;
}
#ifdef CONFIG_MIGRATION
/*
* metapage_migrate_folio - Migration function for JFS metapages
*/
static int metapage_migrate_folio(struct address_space *mapping,
struct folio *dst, struct folio *src,
enum migrate_mode mode)
{
int expected_count;
if (!src->private)
return filemap_migrate_folio(mapping, dst, src, mode);
/* Check whether page does not have extra refs before we do more work */
expected_count = folio_expected_ref_count(src) + 1;
if (folio_ref_count(src) != expected_count)
return -EAGAIN;
return __metapage_migrate_folio(mapping, dst, src, mode);
}
#else
#define metapage_migrate_folio NULL
#endif /* CONFIG_MIGRATION */
static void metapage_invalidate_folio(struct folio *folio, size_t offset,
size_t length)
{
BUG_ON(offset || length < folio_size(folio));
BUG_ON(folio_test_writeback(folio));
metapage_release_folio(folio, 0);
}
const struct address_space_operations jfs_metapage_aops = {
.read_folio = metapage_read_folio,
.writepages = metapage_writepages,
.release_folio = metapage_release_folio,
.invalidate_folio = metapage_invalidate_folio,
.dirty_folio = filemap_dirty_folio,
.migrate_folio = metapage_migrate_folio,
};
struct metapage *__get_metapage(struct inode *inode, unsigned long lblock,
unsigned int size, int absolute,
unsigned long new)
{
int l2BlocksPerPage;
int l2bsize;
struct address_space *mapping;
struct metapage *mp = NULL;
struct folio *folio;
unsigned long page_index;
unsigned long page_offset;
jfs_info("__get_metapage: ino = %ld, lblock = 0x%lx, abs=%d",
inode->i_ino, lblock, absolute);
l2bsize = inode->i_blkbits;
l2BlocksPerPage = PAGE_SHIFT - l2bsize;
page_index = lblock >> l2BlocksPerPage;
page_offset = (lblock - (page_index << l2BlocksPerPage)) << l2bsize;
if ((page_offset + size) > PAGE_SIZE) {
jfs_err("MetaData crosses page boundary!!");
jfs_err("lblock = %lx, size = %d", lblock, size);
dump_stack();
return NULL;
}
if (absolute)
mapping = JFS_SBI(inode->i_sb)->direct_inode->i_mapping;
else {
/*
* If an nfs client tries to read an inode that is larger
* than any existing inodes, we may try to read past the
* end of the inode map
*/
if ((lblock << inode->i_blkbits) >= inode->i_size)
return NULL;
mapping = inode->i_mapping;
}
if (new && (PSIZE == PAGE_SIZE)) {
folio = filemap_grab_folio(mapping, page_index);
if (IS_ERR(folio)) {
jfs_err("filemap_grab_folio failed!");
return NULL;
}
folio_mark_uptodate(folio);
} else {
folio = read_mapping_folio(mapping, page_index, NULL);
if (IS_ERR(folio)) {
jfs_err("read_mapping_page failed!");
return NULL;
}
folio_lock(folio);
}
mp = folio_to_mp(folio, page_offset);
if (mp) {
if (mp->logical_size != size) {
jfs_error(inode->i_sb,
"get_mp->logical_size != size\n");
jfs_err("logical_size = %d, size = %d",
mp->logical_size, size);
dump_stack();
goto unlock;
}
mp->count++;
lock_metapage(mp);
if (test_bit(META_discard, &mp->flag)) {
if (!new) {
jfs_error(inode->i_sb,
"using a discarded metapage\n");
discard_metapage(mp);
goto unlock;
}
clear_bit(META_discard, &mp->flag);
}
} else {
INCREMENT(mpStat.pagealloc);
mp = alloc_metapage(GFP_NOFS);
if (!mp)
goto unlock;
mp->folio = folio;
mp->sb = inode->i_sb;
mp->flag = 0;
mp->xflag = COMMIT_PAGE;
mp->count = 1;
mp->nohomeok = 0;
mp->logical_size = size;
mp->data = folio_address(folio) + page_offset;
mp->index = lblock;
if (unlikely(insert_metapage(folio, mp))) {
free_metapage(mp);
goto unlock;
}
lock_metapage(mp);
}
if (new) {
jfs_info("zeroing mp = 0x%p", mp);
memset(mp->data, 0, PSIZE);
}
folio_unlock(folio);
jfs_info("__get_metapage: returning = 0x%p data = 0x%p", mp, mp->data);
return mp;
unlock:
folio_unlock(folio);
return NULL;
}
void grab_metapage(struct metapage * mp)
{
jfs_info("grab_metapage: mp = 0x%p", mp);
folio_get(mp->folio);
folio_lock(mp->folio);
mp->count++;
lock_metapage(mp);
folio_unlock(mp->folio);
}
static int metapage_write_one(struct folio *folio)
{
struct address_space *mapping = folio->mapping;
struct writeback_control wbc = {
.sync_mode = WB_SYNC_ALL,
.nr_to_write = folio_nr_pages(folio),
};
int ret = 0;
BUG_ON(!folio_test_locked(folio));
folio_wait_writeback(folio);
if (folio_clear_dirty_for_io(folio)) {
folio_get(folio);
ret = metapage_write_folio(folio, &wbc);
if (ret == 0)
folio_wait_writeback(folio);
folio_put(folio);
} else {
folio_unlock(folio);
}
if (!ret)
ret = filemap_check_errors(mapping);
return ret;
}
void force_metapage(struct metapage *mp)
{
struct folio *folio = mp->folio;
jfs_info("force_metapage: mp = 0x%p", mp);
set_bit(META_forcewrite, &mp->flag);
clear_bit(META_sync, &mp->flag);
folio_get(folio);
folio_lock(folio);
folio_mark_dirty(folio);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
clear_bit(META_forcewrite, &mp->flag);
folio_put(folio);
}
void hold_metapage(struct metapage *mp)
{
folio_lock(mp->folio);
}
void put_metapage(struct metapage *mp)
{
if (mp->count || mp->nohomeok) {
/* Someone else will release this */
folio_unlock(mp->folio);
return;
}
folio_get(mp->folio);
mp->count++;
lock_metapage(mp);
folio_unlock(mp->folio);
release_metapage(mp);
}
void release_metapage(struct metapage * mp)
{
struct folio *folio = mp->folio;
jfs_info("release_metapage: mp = 0x%p, flag = 0x%lx", mp, mp->flag);
folio_lock(folio);
unlock_metapage(mp);
assert(mp->count);
if (--mp->count || mp->nohomeok) {
folio_unlock(folio);
folio_put(folio);
return;
}
if (test_bit(META_dirty, &mp->flag)) {
folio_mark_dirty(folio);
if (test_bit(META_sync, &mp->flag)) {
clear_bit(META_sync, &mp->flag);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
folio_lock(folio);
}
} else if (mp->lsn) /* discard_metapage doesn't remove it */
remove_from_logsync(mp);
/* Try to keep metapages from using up too much memory */
drop_metapage(folio, mp);
folio_unlock(folio);
folio_put(folio);
}
void __invalidate_metapages(struct inode *ip, s64 addr, int len)
{
sector_t lblock;
int l2BlocksPerPage = PAGE_SHIFT - ip->i_blkbits;
int BlocksPerPage = 1 << l2BlocksPerPage;
/* All callers are interested in block device's mapping */
struct address_space *mapping =
JFS_SBI(ip->i_sb)->direct_inode->i_mapping;
struct metapage *mp;
unsigned int offset;
/*
* Mark metapages to discard. They will eventually be
* released, but should not be written.
*/
for (lblock = addr & ~(BlocksPerPage - 1); lblock < addr + len;
lblock += BlocksPerPage) {
struct folio *folio = filemap_lock_folio(mapping,
lblock >> l2BlocksPerPage);
if (IS_ERR(folio))
continue;
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (!mp)
continue;
if (mp->index < addr)
continue;
if (mp->index >= addr + len)
break;
clear_bit(META_dirty, &mp->flag);
set_bit(META_discard, &mp->flag);
if (mp->lsn)
remove_from_logsync(mp);
}
folio_unlock(folio);
folio_put(folio);
}
}
#ifdef CONFIG_JFS_STATISTICS
int jfs_mpstat_proc_show(struct seq_file *m, void *v)
{
seq_printf(m,
"JFS Metapage statistics\n"
"=======================\n"
"page allocations = %d\n"
"page frees = %d\n"
"lock waits = %d\n",
mpStat.pagealloc,
mpStat.pagefree,
mpStat.lockwait);
return 0;
}
#endif
] |
| 184/4 | 2026/03/16 16:16 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 11344 output: 23 thoughts: 2004 Thoughts: **Analysis of `folio->mapping` Usage in `fs/jfs/jfs_metapage.c`** Alright, I've gone through the file and identified several instances where `folio->mapping` is accessed without a NULL check. My initial scan located six instances: in `insert_metapage`, `remove_metapage`, `last_write_complete`, `metapage_write_folio`, `metapage_read_folio`, and `metapage_write_one`. In the first two instances, within `insert_metapage` and `remove_metapage`, `folio->mapping->host->i_blkbits` is accessed. I noted that no null check is present for `folio->mapping` in either. Then I checked `last_write_complete`, where `folio->mapping` is passed directly to `mapping_set_error`. This function dereferences `mapping->flags`, so a NULL `folio->mapping` would cause a crash. The subsequent `metapage_write_folio` and `metapage_read_folio` functions both dereference `folio->mapping->host`, again, without a NULL check. Finally, in `metapage_write_one`, `folio->mapping` is assigned to `mapping`, and then the mapping is dereferenced within `filemap_check_errors`. My next step was to consider if these contexts even *allow* `folio->mapping` to be NULL. I know that, normally, pages in the page cache have their `mapping` set. However, folios can be truncated or evicted, which might lead to the `mapping` being cleared. Specifically, `metapage_write_folio`, `metapage_read_folio`, and `metapage_write_one` all seem to expect a valid `mapping` given they are called via `address_space_operations` or on folios meant to be in a mapping. I re-examined `insert_metapage` and `remove_metapage`, given their potential for being called from different contexts. I traced the call paths back to `__get_metapage`, `metapage_read_folio`, and `metapage_release_folio`. 
In `__get_metapage`, the folio comes from either `filemap_grab_folio` or `read_mapping_folio` which *use* a valid `mapping`, so the subsequent `folio->mapping` should also be valid. Likewise, `metapage_read_folio` is an `a_op`, and the mapping should be valid there. However, `metapage_release_folio` which calls `remove_metapage` is called when a folio is being freed. This context *might* allow `folio->mapping` to be NULL if the folio is being evicted, although `release_folio` usually happens while the folio is still in the mapping. Now I remember a recent fix. I will check the git log for this file. It is highly probable that the previous fix addresses such instances. |
| 185/4 | 2026/03/16 16:16 | tool | codesearch-find-references |
Args:map[Name:insert_metapage] Results: map[References:[map[ReferenceKind:calls ReferencingEntityKind:function ReferencingEntityName:__get_metapage SourceFile:fs/jfs/jfs_metapage.c SourceLine:772] map[ReferenceKind:calls ReferencingEntityKind:function ReferencingEntityName:__metapage_migrate_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:248] map[ReferenceKind:calls ReferencingEntityKind:function ReferencingEntityName:metapage_read_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:586]] TruncatedOutput:false] |
| 186/4 | 2026/03/16 16:16 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 14222 output: 23 thoughts: 13 Thoughts: Okay, here's my summary, taking the perspective of someone knowledgeable in the field: **Internal Audit: The Meta-Page Blind Spot** Damn it! I was so focused on the core logic and optimizations, I completely missed something crucial. I've been reviewing this code, and I've got to circle back to *`remove_metapage`* calls. I had that function firmly in my memory as important but I failed to identify and make a note to follow up on it. I need to make sure I haven't overlooked any instances of it being called and verify the behavior in each context, especially regarding memory management and potential side effects. My initial read-through was too shallow. I need to go deeper and account for the effects of this call in every scenario. I need to treat this as though I were running through a checklist, looking for possible failures, edge cases, and ensure the correct response in this area. It's a fundamental element and could have serious implications if handled incorrectly. This requires a dedicated, focused review, not just a casual glance. Back to the drawing board... |
| 187/4 | 2026/03/16 16:16 | tool | codesearch-find-references |
Args:map[Name:remove_metapage] Results: map[References:[map[ReferenceKind:calls ReferencingEntityKind:function ReferencingEntityName:__metapage_migrate_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:254] map[ReferenceKind:calls ReferencingEntityKind:function ReferencingEntityName:drop_metapage SourceFile:fs/jfs/jfs_metapage.c SourceLine:314] map[ReferenceKind:calls ReferencingEntityKind:function ReferencingEntityName:metapage_release_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:634]] TruncatedOutput:false] |
| 188/4 | 2026/03/16 16:16 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 14678 output: 21 thoughts: 1257 Thoughts: **Analyzing Potential NULL Dereference in JFS Metapage Handling**
Okay, let's dive into this. I need to understand a potential NULL pointer dereference, specifically around `folio->mapping` in the JFS file system. I started by checking the call chain of `drop_metapage`, realizing it's called from `release_metapage`, which is itself called from `put_metapage`. My initial thought was to search for recent commits that might address a similar issue, perhaps a fix for a null dereference involving `folio->mapping`. However, I don't have direct access to `git log` or documentation tools in this environment.
So, I decided to examine the code directly, thinking it would be quicker. I zeroed in on the usage of `folio->mapping` within the metapage functions. I found that both `insert_metapage` and `remove_metapage` dereference `folio->mapping` to access `host->i_blkbits`:
```c
// in insert_metapage:
if (mp) {
l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
}
// in remove_metapage:
static inline void remove_metapage(struct folio *folio, struct metapage *mp)
{
struct meta_anchor *a = folio->private;
int l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
}
```
This immediately flagged a potential crash if `folio->mapping` is NULL.
I reviewed the call sites for `remove_metapage`. It's called from `__metapage_migrate_folio`, `drop_metapage`, and `metapage_release_folio`. I recall that `metapage_release_folio` is an `address_space_operation`. And now I remember, `release_folio` is invoked when the kernel needs to free the folio's private data, meaning it can be called while the folio is being truncated and `folio->mapping` might be set to NULL.
Examining `metapage_release_folio` itself solidified my concern:
```c
static bool metapage_release_folio(struct folio *folio, gfp_t gfp_mask)
{
struct metapage *mp;
bool ret = true;
int offset;
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (!mp)
continue;
jfs_info("metapage_release_folio: mp = 0x%p", mp);
if (mp->count || mp->nohomeok ||
test_bit(META_dirty, &mp->flag)) {
jfs_info("count = %ld, nohomeok = %d", mp->count,
mp->nohomeok);
ret = false;
continue;
}
if (mp->lsn)
remove_from_logsync(mp);
remove_metapage(folio, mp);
INCREMENT(mpStat.pagefree);
free_metapage(mp);
}
return ret;
}
```
So, if `metapage_release_folio` is called, `remove_metapage` is definitely called and `folio->mapping` is dereferenced.
I'm thinking about the scenario: can JFS folios have `private` data without a `mapping`? Yes, I see that when a folio is removed from the page cache (like during truncation), `folio->mapping` becomes NULL. However, `release_folio` is usually called *before* the folio is completely removed from the mapping, or while something is still referencing it.
And then I had a nagging feeling: I remember a bug report, and so now I’m going through the code looking for any place `folio->mapping` is used, and if there are checks to handle a NULL condition.
I see `last_write_complete`:
```c
static void last_write_complete(struct folio *folio, blk_status_t status)
{
struct metapage *mp;
unsigned int offset;
if (status) {
int err = blk_status_to_errno(status);
printk(KERN_ERR "metapage_write_end_io: I/O error\n");
mapping_set_error(folio->mapping, err);
}
|
| 189/4 | 2026/03/16 16:16 | tool | codesearch-find-references |
Args:map[Name:folio::mapping] Results: map[References:<nil> TruncatedOutput:false] |
| 190/4 | 2026/03/16 16:17 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 17002 output: 28 thoughts: 2948 Thoughts: **Analysis of Potential NULL Pointer Dereferences in JFS Metapage Code** Okay, so I've been digging into the JFS metapage code in `fs/jfs/jfs_metapage.c`, and I've got a list of places where `folio->mapping` is dereferenced. My initial hunt for cross-references didn't turn up much, so I'm doing a careful read of the code to find potential issues. The primary concern is if `folio->mapping` can be NULL in any of these contexts, leading to a crash. I've zeroed in on several functions where this could be a problem. In particular, both `insert_metapage` and `remove_metapage` dereference `folio->mapping` without any null checks. Also, `last_write_complete`, `metapage_write_folio`, `metapage_read_folio`, and `metapage_write_one` all look like potential failure points. `metapage_write_one` is interesting because it calls `filemap_check_errors` on `folio->mapping`. I'm thinking about the control flow here. `last_write_complete` is called from an I/O completion handler, but it could be that the folio's mapping gets cleared during truncation or other operations. The recent commit, which I don't have access to directly, likely addressed this potential problem. While I don't see the specific NULL check, I can see the code does not check if the value is `NULL`. I have been considering the code around `metapage_write_one`, which leads to `metapage_write_folio`. `metapage_write_one` is called by functions like `release_metapage`, meaning a folio might not necessarily be tied to the inode's mapping. I know that JFS metapages are generally part of an inode's mapping, either directly or indirectly. The question is whether eviction or truncation or other things can cause this mapping to be cleared and then the folio's mapping dereferenced later without a NULL check. I'm now looking at other potential call sites. 
I have checked for instances where `folio->mapping` *is* NULL-checked in these functions, and it seems this is not the case in any of the functions. `remove_metapage` is called by `metapage_release_folio`. If `folio->mapping` is NULL when `metapage_release_folio` is called, it crashes. The critical point is: could `folio->mapping` be NULL *before* `metapage_release_folio` is called? The answer is yes, if the folio is freed after being removed from the mapping. I'm also looking at `metapage_invalidate_folio`, which calls `metapage_release_folio`. The same considerations apply. I am re-checking the code to see if there are other potential sources of the dereference, without checks, for `folio->mapping`. I've identified the functions: 1. `insert_metapage` 2. `remove_metapage` 3. `last_write_complete` 4. `metapage_write_folio` 5. `metapage_read_folio` 6. `metapage_write_one` where `folio->mapping` is dereferenced without checking if it's NULL, therefore these might be potential issues. |
| 191/4 | 2026/03/16 16:17 | tool | read-file |
Args:map[File:fs/jfs/jfs_metapage.c] Results: map[Contents:// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Copyright (C) International Business Machines Corp., 2000-2005
* Portions Copyright (C) Christoph Hellwig, 2001-2002
*/
#include <linux/blkdev.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/bio.h>
#include <linux/slab.h>
#include <linux/init.h>
#include <linux/buffer_head.h>
#include <linux/mempool.h>
#include <linux/seq_file.h>
#include <linux/writeback.h>
#include <linux/migrate.h>
#include "jfs_incore.h"
#include "jfs_superblock.h"
#include "jfs_filsys.h"
#include "jfs_metapage.h"
#include "jfs_txnmgr.h"
#include "jfs_debug.h"
#ifdef CONFIG_JFS_STATISTICS
static struct {
uint pagealloc; /* # of page allocations */
uint pagefree; /* # of page frees */
uint lockwait; /* # of sleeping lock_metapage() calls */
} mpStat;
#endif
#define metapage_locked(mp) test_bit(META_locked, &(mp)->flag)
#define trylock_metapage(mp) test_and_set_bit_lock(META_locked, &(mp)->flag)
static inline void unlock_metapage(struct metapage *mp)
{
clear_bit_unlock(META_locked, &mp->flag);
wake_up(&mp->wait);
}
static inline void __lock_metapage(struct metapage *mp)
{
DECLARE_WAITQUEUE(wait, current);
INCREMENT(mpStat.lockwait);
add_wait_queue_exclusive(&mp->wait, &wait);
do {
set_current_state(TASK_UNINTERRUPTIBLE);
if (metapage_locked(mp)) {
folio_unlock(mp->folio);
io_schedule();
folio_lock(mp->folio);
}
} while (trylock_metapage(mp));
__set_current_state(TASK_RUNNING);
remove_wait_queue(&mp->wait, &wait);
}
/*
* Must have mp->folio locked
*/
static inline void lock_metapage(struct metapage *mp)
{
if (trylock_metapage(mp))
__lock_metapage(mp);
}
#define METAPOOL_MIN_PAGES 32
static struct kmem_cache *metapage_cache;
static mempool_t *metapage_mempool;
#define MPS_PER_PAGE (PAGE_SIZE >> L2PSIZE)
#if MPS_PER_PAGE > 1
struct meta_anchor {
int mp_count;
atomic_t io_count;
blk_status_t status;
struct metapage *mp[MPS_PER_PAGE];
};
static inline struct metapage *folio_to_mp(struct folio *folio, int offset)
{
struct meta_anchor *anchor = folio->private;
if (!anchor)
return NULL;
return anchor->mp[offset >> L2PSIZE];
}
static inline int insert_metapage(struct folio *folio, struct metapage *mp)
{
struct meta_anchor *a;
int index;
int l2mp_blocks; /* log2 blocks per metapage */
a = folio->private;
if (!a) {
a = kzalloc_obj(struct meta_anchor, GFP_NOFS);
if (!a)
return -ENOMEM;
folio_attach_private(folio, a);
kmap(&folio->page);
}
if (mp) {
		/* folio->mapping may be NULL after truncation; use mp->sb instead */
		l2mp_blocks = L2PSIZE - mp->sb->s_blocksize_bits;
index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1);
a->mp_count++;
a->mp[index] = mp;
}
return 0;
}
static inline void remove_metapage(struct folio *folio, struct metapage *mp)
{
struct meta_anchor *a = folio->private;
	/* folio->mapping may be NULL after truncation; use mp->sb instead */
	int l2mp_blocks = L2PSIZE - mp->sb->s_blocksize_bits;
int index;
index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1);
BUG_ON(a->mp[index] != mp);
a->mp[index] = NULL;
if (--a->mp_count == 0) {
kfree(a);
folio_detach_private(folio);
kunmap(&folio->page);
}
}
static inline void inc_io(struct folio *folio)
{
struct meta_anchor *anchor = folio->private;
atomic_inc(&anchor->io_count);
}
static inline void dec_io(struct folio *folio, blk_status_t status,
void (*handler)(struct folio *, blk_status_t))
{
struct meta_anchor *anchor = folio->private;
if (anchor->status == BLK_STS_OK)
anchor->status = status;
if (atomic_dec_and_test(&anchor->io_count))
handler(folio, anchor->status);
}
#ifdef CONFIG_MIGRATION
static int __metapage_migrate_folio(struct address_space *mapping,
struct folio *dst, struct folio *src,
enum migrate_mode mode)
{
struct meta_anchor *src_anchor = src->private;
struct metapage *mps[MPS_PER_PAGE] = {0};
struct metapage *mp;
int i, rc;
for (i = 0; i < MPS_PER_PAGE; i++) {
mp = src_anchor->mp[i];
if (mp && metapage_locked(mp))
return -EAGAIN;
}
rc = filemap_migrate_folio(mapping, dst, src, mode);
if (rc)
return rc;
for (i = 0; i < MPS_PER_PAGE; i++) {
mp = src_anchor->mp[i];
if (!mp)
continue;
if (unlikely(insert_metapage(dst, mp))) {
			/* If error, roll back previously inserted pages */
for (int j = 0 ; j < i; j++) {
if (mps[j])
remove_metapage(dst, mps[j]);
}
return -EAGAIN;
}
mps[i] = mp;
}
/* Update the metapage and remove it from src */
for (i = 0; i < MPS_PER_PAGE; i++) {
mp = mps[i];
if (mp) {
int page_offset = mp->data - folio_address(src);
mp->data = folio_address(dst) + page_offset;
mp->folio = dst;
remove_metapage(src, mp);
}
}
return 0;
}
#endif /* CONFIG_MIGRATION */
#else
static inline struct metapage *folio_to_mp(struct folio *folio, int offset)
{
return folio->private;
}
static inline int insert_metapage(struct folio *folio, struct metapage *mp)
{
if (mp) {
folio_attach_private(folio, mp);
kmap(&folio->page);
}
return 0;
}
static inline void remove_metapage(struct folio *folio, struct metapage *mp)
{
folio_detach_private(folio);
kunmap(&folio->page);
}
#define inc_io(folio) do {} while(0)
#define dec_io(folio, status, handler) handler(folio, status)
#ifdef CONFIG_MIGRATION
static int __metapage_migrate_folio(struct address_space *mapping,
struct folio *dst, struct folio *src,
enum migrate_mode mode)
{
struct metapage *mp;
int page_offset;
int rc;
mp = folio_to_mp(src, 0);
if (metapage_locked(mp))
return -EAGAIN;
rc = filemap_migrate_folio(mapping, dst, src, mode);
if (rc)
return rc;
if (unlikely(insert_metapage(dst, mp)))
return -EAGAIN;
page_offset = mp->data - folio_address(src);
mp->data = folio_address(dst) + page_offset;
mp->folio = dst;
remove_metapage(src, mp);
return 0;
}
#endif /* CONFIG_MIGRATION */
#endif
static inline struct metapage *alloc_metapage(gfp_t gfp_mask)
{
struct metapage *mp = mempool_alloc(metapage_mempool, gfp_mask);
if (mp) {
mp->lid = 0;
mp->lsn = 0;
mp->data = NULL;
mp->clsn = 0;
mp->log = NULL;
init_waitqueue_head(&mp->wait);
}
return mp;
}
static inline void free_metapage(struct metapage *mp)
{
mempool_free(mp, metapage_mempool);
}
int __init metapage_init(void)
{
/*
* Allocate the metapage structures
*/
metapage_cache = kmem_cache_create("jfs_mp", sizeof(struct metapage),
0, 0, NULL);
if (metapage_cache == NULL)
return -ENOMEM;
metapage_mempool = mempool_create_slab_pool(METAPOOL_MIN_PAGES,
metapage_cache);
if (metapage_mempool == NULL) {
kmem_cache_destroy(metapage_cache);
return -ENOMEM;
}
return 0;
}
void metapage_exit(void)
{
mempool_destroy(metapage_mempool);
kmem_cache_destroy(metapage_cache);
}
static inline void drop_metapage(struct folio *folio, struct metapage *mp)
{
if (mp->count || mp->nohomeok || test_bit(META_dirty, &mp->flag) ||
test_bit(META_io, &mp->flag))
return;
remove_metapage(folio, mp);
INCREMENT(mpStat.pagefree);
free_metapage(mp);
}
/*
* Metapage address space operations
*/
static sector_t metapage_get_blocks(struct inode *inode, sector_t lblock,
int *len)
{
int rc = 0;
int xflag;
s64 xaddr;
sector_t file_blocks = (inode->i_size + inode->i_sb->s_blocksize - 1) >>
inode->i_blkbits;
if (lblock >= file_blocks)
return 0;
if (lblock + *len > file_blocks)
*len = file_blocks - lblock;
if (inode->i_ino) {
rc = xtLookup(inode, (s64)lblock, *len, &xflag, &xaddr, len, 0);
if ((rc == 0) && *len)
lblock = (sector_t)xaddr;
else
lblock = 0;
} /* else no mapping */
return lblock;
}
static void last_read_complete(struct folio *folio, blk_status_t status)
{
if (status)
printk(KERN_ERR "Read error %d at %#llx\n", status,
folio_pos(folio));
folio_end_read(folio, status == 0);
}
static void metapage_read_end_io(struct bio *bio)
{
struct folio *folio = bio->bi_private;
dec_io(folio, bio->bi_status, last_read_complete);
bio_put(bio);
}
static void remove_from_logsync(struct metapage *mp)
{
struct jfs_log *log = mp->log;
unsigned long flags;
/*
* This can race. Recheck that log hasn't been set to null, and after
* acquiring logsync lock, recheck lsn
*/
if (!log)
return;
LOGSYNC_LOCK(log, flags);
if (mp->lsn) {
mp->log = NULL;
mp->lsn = 0;
mp->clsn = 0;
log->count--;
list_del(&mp->synclist);
}
LOGSYNC_UNLOCK(log, flags);
}
static void last_write_complete(struct folio *folio, blk_status_t status)
{
struct metapage *mp;
unsigned int offset;
if (status) {
int err = blk_status_to_errno(status);
printk(KERN_ERR "metapage_write_end_io: I/O error\n");
		/* The folio may have been truncated while under writeback */
		if (folio->mapping)
			mapping_set_error(folio->mapping, err);
}
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (mp && test_bit(META_io, &mp->flag)) {
if (mp->lsn)
remove_from_logsync(mp);
clear_bit(META_io, &mp->flag);
}
/*
* I'd like to call drop_metapage here, but I don't think it's
* safe unless I have the page locked
*/
}
folio_end_writeback(folio);
}
static void metapage_write_end_io(struct bio *bio)
{
struct folio *folio = bio->bi_private;
BUG_ON(!folio->private);
dec_io(folio, bio->bi_status, last_write_complete);
bio_put(bio);
}
static int metapage_write_folio(struct folio *folio,
struct writeback_control *wbc)
{
struct bio *bio = NULL;
int block_offset; /* block offset of mp within page */
struct inode *inode = folio->mapping->host;
int blocks_per_mp = JFS_SBI(inode->i_sb)->nbperpage;
int len;
int xlen;
struct metapage *mp;
int redirty = 0;
sector_t lblock;
int nr_underway = 0;
sector_t pblock;
sector_t next_block = 0;
sector_t page_start;
unsigned long bio_bytes = 0;
unsigned long bio_offset = 0;
int offset;
int bad_blocks = 0;
page_start = folio_pos(folio) >> inode->i_blkbits;
BUG_ON(!folio_test_locked(folio));
BUG_ON(folio_test_writeback(folio));
folio_start_writeback(folio);
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (!mp || !test_bit(META_dirty, &mp->flag))
continue;
if (mp->nohomeok && !test_bit(META_forcewrite, &mp->flag)) {
redirty = 1;
/*
* Make sure this page isn't blocked indefinitely.
* If the journal isn't undergoing I/O, push it
*/
if (mp->log && !(mp->log->cflag & logGC_PAGEOUT))
jfs_flush_journal(mp->log, 0);
continue;
}
clear_bit(META_dirty, &mp->flag);
set_bit(META_io, &mp->flag);
block_offset = offset >> inode->i_blkbits;
lblock = page_start + block_offset;
if (bio) {
if (xlen && lblock == next_block) {
/* Contiguous, in memory & on disk */
len = min(xlen, blocks_per_mp);
xlen -= len;
bio_bytes += len << inode->i_blkbits;
continue;
}
/* Not contiguous */
bio_add_folio_nofail(bio, folio, bio_bytes, bio_offset);
/*
* Increment counter before submitting i/o to keep
* count from hitting zero before we're through
*/
inc_io(folio);
if (!bio->bi_iter.bi_size)
goto dump_bio;
submit_bio(bio);
nr_underway++;
bio = NULL;
} else
inc_io(folio);
xlen = (folio_size(folio) - offset) >> inode->i_blkbits;
pblock = metapage_get_blocks(inode, lblock, &xlen);
if (!pblock) {
printk(KERN_ERR "JFS: metapage_get_blocks failed\n");
/*
* We already called inc_io(), but can't cancel it
* with dec_io() until we're done with the page
*/
bad_blocks++;
continue;
}
len = min(xlen, (int)JFS_SBI(inode->i_sb)->nbperpage);
bio = bio_alloc(inode->i_sb->s_bdev, 1, REQ_OP_WRITE, GFP_NOFS);
bio->bi_iter.bi_sector = pblock << (inode->i_blkbits - 9);
bio->bi_end_io = metapage_write_end_io;
bio->bi_private = folio;
/* Don't call bio_add_page yet, we may add to this vec */
bio_offset = offset;
bio_bytes = len << inode->i_blkbits;
xlen -= len;
next_block = lblock + len;
}
if (bio) {
bio_add_folio_nofail(bio, folio, bio_bytes, bio_offset);
if (!bio->bi_iter.bi_size)
goto dump_bio;
submit_bio(bio);
nr_underway++;
}
if (redirty)
folio_redirty_for_writepage(wbc, folio);
folio_unlock(folio);
if (bad_blocks)
goto err_out;
if (nr_underway == 0)
folio_end_writeback(folio);
return 0;
dump_bio:
print_hex_dump(KERN_ERR, "JFS: dump of bio: ", DUMP_PREFIX_ADDRESS, 16,
4, bio, sizeof(*bio), 0);
bio_put(bio);
folio_unlock(folio);
dec_io(folio, BLK_STS_OK, last_write_complete);
err_out:
while (bad_blocks--)
dec_io(folio, BLK_STS_OK, last_write_complete);
return -EIO;
}
static int metapage_writepages(struct address_space *mapping,
struct writeback_control *wbc)
{
struct blk_plug plug;
struct folio *folio = NULL;
int err;
blk_start_plug(&plug);
while ((folio = writeback_iter(mapping, wbc, folio, &err)))
err = metapage_write_folio(folio, wbc);
blk_finish_plug(&plug);
return err;
}
static int metapage_read_folio(struct file *fp, struct folio *folio)
{
struct inode *inode = folio->mapping->host;
struct bio *bio = NULL;
int block_offset;
int blocks_per_page = i_blocks_per_folio(inode, folio);
sector_t page_start; /* address of page in fs blocks */
sector_t pblock;
int xlen;
unsigned int len;
int offset;
BUG_ON(!folio_test_locked(folio));
page_start = folio_pos(folio) >> inode->i_blkbits;
block_offset = 0;
while (block_offset < blocks_per_page) {
xlen = blocks_per_page - block_offset;
pblock = metapage_get_blocks(inode, page_start + block_offset,
&xlen);
if (pblock) {
if (!folio->private)
insert_metapage(folio, NULL);
inc_io(folio);
if (bio)
submit_bio(bio);
bio = bio_alloc(inode->i_sb->s_bdev, 1, REQ_OP_READ,
GFP_NOFS);
bio->bi_iter.bi_sector =
pblock << (inode->i_blkbits - 9);
bio->bi_end_io = metapage_read_end_io;
bio->bi_private = folio;
len = xlen << inode->i_blkbits;
offset = block_offset << inode->i_blkbits;
bio_add_folio_nofail(bio, folio, len, offset);
block_offset += xlen;
} else
block_offset++;
}
if (bio)
submit_bio(bio);
else
folio_unlock(folio);
return 0;
}
static bool metapage_release_folio(struct folio *folio, gfp_t gfp_mask)
{
struct metapage *mp;
bool ret = true;
int offset;
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (!mp)
continue;
jfs_info("metapage_release_folio: mp = 0x%p", mp);
if (mp->count || mp->nohomeok ||
test_bit(META_dirty, &mp->flag)) {
jfs_info("count = %ld, nohomeok = %d", mp->count,
mp->nohomeok);
ret = false;
continue;
}
if (mp->lsn)
remove_from_logsync(mp);
remove_metapage(folio, mp);
INCREMENT(mpStat.pagefree);
free_metapage(mp);
}
return ret;
}
#ifdef CONFIG_MIGRATION
/*
* metapage_migrate_folio - Migration function for JFS metapages
*/
static int metapage_migrate_folio(struct address_space *mapping,
struct folio *dst, struct folio *src,
enum migrate_mode mode)
{
int expected_count;
if (!src->private)
return filemap_migrate_folio(mapping, dst, src, mode);
/* Check whether page does not have extra refs before we do more work */
expected_count = folio_expected_ref_count(src) + 1;
if (folio_ref_count(src) != expected_count)
return -EAGAIN;
return __metapage_migrate_folio(mapping, dst, src, mode);
}
#else
#define metapage_migrate_folio NULL
#endif /* CONFIG_MIGRATION */
static void metapage_invalidate_folio(struct folio *folio, size_t offset,
size_t length)
{
BUG_ON(offset || length < folio_size(folio));
BUG_ON(folio_test_writeback(folio));
metapage_release_folio(folio, 0);
}
const struct address_space_operations jfs_metapage_aops = {
.read_folio = metapage_read_folio,
.writepages = metapage_writepages,
.release_folio = metapage_release_folio,
.invalidate_folio = metapage_invalidate_folio,
.dirty_folio = filemap_dirty_folio,
.migrate_folio = metapage_migrate_folio,
};
struct metapage *__get_metapage(struct inode *inode, unsigned long lblock,
unsigned int size, int absolute,
unsigned long new)
{
int l2BlocksPerPage;
int l2bsize;
struct address_space *mapping;
struct metapage *mp = NULL;
struct folio *folio;
unsigned long page_index;
unsigned long page_offset;
jfs_info("__get_metapage: ino = %ld, lblock = 0x%lx, abs=%d",
inode->i_ino, lblock, absolute);
l2bsize = inode->i_blkbits;
l2BlocksPerPage = PAGE_SHIFT - l2bsize;
page_index = lblock >> l2BlocksPerPage;
page_offset = (lblock - (page_index << l2BlocksPerPage)) << l2bsize;
if ((page_offset + size) > PAGE_SIZE) {
jfs_err("MetaData crosses page boundary!!");
jfs_err("lblock = %lx, size = %d", lblock, size);
dump_stack();
return NULL;
}
if (absolute)
mapping = JFS_SBI(inode->i_sb)->direct_inode->i_mapping;
else {
/*
* If an nfs client tries to read an inode that is larger
* than any existing inodes, we may try to read past the
* end of the inode map
*/
if ((lblock << inode->i_blkbits) >= inode->i_size)
return NULL;
mapping = inode->i_mapping;
}
if (new && (PSIZE == PAGE_SIZE)) {
folio = filemap_grab_folio(mapping, page_index);
if (IS_ERR(folio)) {
jfs_err("filemap_grab_folio failed!");
return NULL;
}
folio_mark_uptodate(folio);
} else {
folio = read_mapping_folio(mapping, page_index, NULL);
if (IS_ERR(folio)) {
jfs_err("read_mapping_page failed!");
return NULL;
}
folio_lock(folio);
}
mp = folio_to_mp(folio, page_offset);
if (mp) {
if (mp->logical_size != size) {
jfs_error(inode->i_sb,
"get_mp->logical_size != size\n");
jfs_err("logical_size = %d, size = %d",
mp->logical_size, size);
dump_stack();
goto unlock;
}
mp->count++;
lock_metapage(mp);
if (test_bit(META_discard, &mp->flag)) {
if (!new) {
jfs_error(inode->i_sb,
"using a discarded metapage\n");
discard_metapage(mp);
goto unlock;
}
clear_bit(META_discard, &mp->flag);
}
} else {
INCREMENT(mpStat.pagealloc);
mp = alloc_metapage(GFP_NOFS);
if (!mp)
goto unlock;
mp->folio = folio;
mp->sb = inode->i_sb;
mp->flag = 0;
mp->xflag = COMMIT_PAGE;
mp->count = 1;
mp->nohomeok = 0;
mp->logical_size = size;
mp->data = folio_address(folio) + page_offset;
mp->index = lblock;
if (unlikely(insert_metapage(folio, mp))) {
free_metapage(mp);
goto unlock;
}
lock_metapage(mp);
}
if (new) {
jfs_info("zeroing mp = 0x%p", mp);
memset(mp->data, 0, PSIZE);
}
folio_unlock(folio);
jfs_info("__get_metapage: returning = 0x%p data = 0x%p", mp, mp->data);
return mp;
unlock:
folio_unlock(folio);
return NULL;
}
void grab_metapage(struct metapage * mp)
{
jfs_info("grab_metapage: mp = 0x%p", mp);
folio_get(mp->folio);
folio_lock(mp->folio);
mp->count++;
lock_metapage(mp);
folio_unlock(mp->folio);
}
static int metapage_write_one(struct folio *folio)
{
struct address_space *mapping = folio->mapping;
struct writeback_control wbc = {
.sync_mode = WB_SYNC_ALL,
.nr_to_write = folio_nr_pages(folio),
};
int ret = 0;
BUG_ON(!folio_test_locked(folio));
folio_wait_writeback(folio);
if (folio_clear_dirty_for_io(folio)) {
folio_get(folio);
ret = metapage_write_folio(folio, &wbc);
if (ret == 0)
folio_wait_writeback(folio);
folio_put(folio);
} else {
folio_unlock(folio);
}
if (!ret)
ret = filemap_check_errors(mapping);
return ret;
}
void force_metapage(struct metapage *mp)
{
struct folio *folio = mp->folio;
jfs_info("force_metapage: mp = 0x%p", mp);
set_bit(META_forcewrite, &mp->flag);
clear_bit(META_sync, &mp->flag);
	folio_get(folio);
	folio_lock(folio);
	if (folio->mapping) {
		folio_mark_dirty(folio);
		if (metapage_write_one(folio))
			jfs_error(mp->sb, "metapage_write_one() failed\n");
	} else {
		/* Truncated folio: nothing to write back */
		clear_bit(META_dirty, &mp->flag);
		if (mp->lsn)
			remove_from_logsync(mp);
		folio_unlock(folio);
	}
clear_bit(META_forcewrite, &mp->flag);
folio_put(folio);
}
void hold_metapage(struct metapage *mp)
{
folio_lock(mp->folio);
}
void put_metapage(struct metapage *mp)
{
if (mp->count || mp->nohomeok) {
/* Someone else will release this */
folio_unlock(mp->folio);
return;
}
folio_get(mp->folio);
mp->count++;
lock_metapage(mp);
folio_unlock(mp->folio);
release_metapage(mp);
}
void release_metapage(struct metapage * mp)
{
struct folio *folio = mp->folio;
jfs_info("release_metapage: mp = 0x%p, flag = 0x%lx", mp, mp->flag);
folio_lock(folio);
unlock_metapage(mp);
assert(mp->count);
if (--mp->count || mp->nohomeok) {
folio_unlock(folio);
folio_put(folio);
return;
}
	if (test_bit(META_dirty, &mp->flag)) {
		if (folio->mapping) {
			folio_mark_dirty(folio);
			if (test_bit(META_sync, &mp->flag)) {
				clear_bit(META_sync, &mp->flag);
				if (metapage_write_one(folio))
					jfs_error(mp->sb, "metapage_write_one() failed\n");
				folio_lock(folio);
			}
		} else {
			/*
			 * Truncated folio: don't write it back. Clear
			 * META_dirty so drop_metapage() can free the mp.
			 */
			clear_bit(META_dirty, &mp->flag);
			clear_bit(META_sync, &mp->flag);
			if (mp->lsn)
				remove_from_logsync(mp);
		}
	} else if (mp->lsn) /* discard_metapage doesn't remove it */
		remove_from_logsync(mp);
/* Try to keep metapages from using up too much memory */
drop_metapage(folio, mp);
folio_unlock(folio);
folio_put(folio);
}
void __invalidate_metapages(struct inode *ip, s64 addr, int len)
{
sector_t lblock;
int l2BlocksPerPage = PAGE_SHIFT - ip->i_blkbits;
int BlocksPerPage = 1 << l2BlocksPerPage;
/* All callers are interested in block device's mapping */
struct address_space *mapping =
JFS_SBI(ip->i_sb)->direct_inode->i_mapping;
struct metapage *mp;
unsigned int offset;
/*
* Mark metapages to discard. They will eventually be
* released, but should not be written.
*/
for (lblock = addr & ~(BlocksPerPage - 1); lblock < addr + len;
lblock += BlocksPerPage) {
struct folio *folio = filemap_lock_folio(mapping,
lblock >> l2BlocksPerPage);
if (IS_ERR(folio))
continue;
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (!mp)
continue;
if (mp->index < addr)
continue;
if (mp->index >= addr + len)
break;
clear_bit(META_dirty, &mp->flag);
set_bit(META_discard, &mp->flag);
if (mp->lsn)
remove_from_logsync(mp);
}
folio_unlock(folio);
folio_put(folio);
}
}
#ifdef CONFIG_JFS_STATISTICS
int jfs_mpstat_proc_show(struct seq_file *m, void *v)
{
seq_printf(m,
"JFS Metapage statistics\n"
"=======================\n"
"page allocations = %d\n"
"page frees = %d\n"
"lock waits = %d\n",
mpStat.pagealloc,
mpStat.pagefree,
mpStat.lockwait);
return 0;
}
#endif
] |
| 192/4 | 2026/03/16 16:17 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 29316 output: 28 thoughts: 1827
Thoughts: **My Analysis of `folio->mapping` Dereferences**

Okay, here's what I've got after reviewing this code. The user's question, "Is there any *other* place...", suggests they're already aware of some instances of potentially unsafe dereferencing of `folio->mapping`. I need to be thorough. I'm focusing on places where `folio->mapping` is accessed without a NULL check. I've found these instances:

1. **`insert_metapage` (line 117):** `l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;` - Dereferences `folio->mapping` directly to access the inode. Crucially, though, this is only true if `MPS_PER_PAGE > 1`. Otherwise, the functions are different.
2. **`remove_metapage` (line 128):** `int l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;` - Similar to above; dereferences `folio->mapping` but only if `MPS_PER_PAGE > 1`.
3. **`last_write_complete` (line 450):** `mapping_set_error(folio->mapping, err);` - Uses `folio->mapping` directly, which could be problematic in an asynchronous context. I should double check that this wasn't recently fixed.
4. **`metapage_write_folio` (line 476):** `struct inode *inode = folio->mapping->host;` - Dereferences `folio->mapping` to get the `inode`.
5. **`metapage_read_folio` (line 571):** `struct inode *inode = folio->mapping->host;` - Again, dereferences to get the `inode`.
6. **`metapage_write_one` (line 818):** `struct address_space *mapping = folio->mapping;` (and `filemap_check_errors(mapping)` at line 837) - Dereferences `folio->mapping`.

Now, I also took a look at `__metapage_migrate_folio`. It uses the `mapping` argument which *could* be relevant, but the critical part is that it calls both `insert_metapage` and `remove_metapage`. Therefore, depending on the value of `MPS_PER_PAGE`, it's indirectly affected by the potential `NULL` dereference through those functions.

I did a quick scan for other uses of `folio->mapping` but didn't find any. I also checked `folio_mapping(folio)`, but it wasn't there. I also looked at a related function, `i_blocks_per_folio`. It is likely that these functions could cause the same NULL dereference on `folio->mapping`. It's worth emphasizing the *potential* impact. If `folio->mapping` is indeed `NULL` in any of these places, the code will crash. The asynchronous nature of `last_write_complete` might make that a particularly nasty one. Also, `insert_metapage` and `remove_metapage` might behave differently based on `MPS_PER_PAGE`. I should also investigate how `MPS_PER_PAGE` is defined. The user might be zeroing in on one particular vulnerable path. |
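The dereference sites listed above share one failure mode: truncation sets `folio->mapping` to NULL while a dirty metapage is still in flight. The guard pattern the fix applies can be reduced to a minimal, self-contained sketch — plain C with illustrative stand-in types, not the real kernel structures; `last_write_complete_sketch` and `mapping_set_error_sketch` are hypothetical names:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for the kernel structures (not the real layouts). */
struct address_space { int error; };
struct folio { struct address_space *mapping; };

/* Record a writeback error, as mapping_set_error() would. */
static void mapping_set_error_sketch(struct address_space *mapping, int err)
{
    mapping->error = err;
}

/*
 * Guarded completion handler: if the folio was truncated (mapping == NULL),
 * skip the error bookkeeping instead of dereferencing NULL.
 * Returns 1 if the error was recorded, 0 if it was skipped.
 */
static int last_write_complete_sketch(struct folio *folio, int err)
{
    if (!folio->mapping)
        return 0;            /* truncated: nothing to report against */
    mapping_set_error_sketch(folio->mapping, err);
    return 1;
}
```

The same `if (!folio->mapping)` test works for every site in the list, because a truncated folio can never become reachable through the mapping again.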
| 193/4 | 2026/03/16 16:17 | tool | read-file |
Args:map[File:fs/jfs/jfs_metapage.h] Results: map[Contents:/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Copyright (C) International Business Machines Corp., 2000-2002
* Portions Copyright (C) Christoph Hellwig, 2001-2002
*/
#ifndef _H_JFS_METAPAGE
#define _H_JFS_METAPAGE
#include <linux/pagemap.h>
struct metapage {
/* Common logsyncblk prefix (see jfs_logmgr.h) */
u16 xflag;
u16 unused;
lid_t lid;
int lsn;
struct list_head synclist;
/* End of logsyncblk prefix */
unsigned long flag; /* See Below */
unsigned long count; /* Reference count */
void *data; /* Data pointer */
sector_t index; /* block address of page */
wait_queue_head_t wait;
/* implementation */
struct folio *folio;
struct super_block *sb;
unsigned int logical_size;
/* Journal management */
int clsn;
int nohomeok;
struct jfs_log *log;
};
/* metapage flag */
#define META_locked 0
#define META_dirty 2
#define META_sync 3
#define META_discard 4
#define META_forcewrite 5
#define META_io 6
#define mark_metapage_dirty(mp) set_bit(META_dirty, &(mp)->flag)
/* function prototypes */
extern int metapage_init(void);
extern void metapage_exit(void);
extern struct metapage *__get_metapage(struct inode *inode,
unsigned long lblock, unsigned int size,
int absolute, unsigned long new);
#define read_metapage(inode, lblock, size, absolute)\
__get_metapage(inode, lblock, size, absolute, false)
#define get_metapage(inode, lblock, size, absolute)\
__get_metapage(inode, lblock, size, absolute, true)
extern void release_metapage(struct metapage *);
extern void grab_metapage(struct metapage *);
extern void force_metapage(struct metapage *);
/*
* hold_metapage and put_metapage are used in conjunction. The page lock
* is not dropped between the two, so no other threads can get or release
* the metapage
*/
extern void hold_metapage(struct metapage *);
extern void put_metapage(struct metapage *);
static inline void write_metapage(struct metapage *mp)
{
set_bit(META_dirty, &mp->flag);
release_metapage(mp);
}
static inline void flush_metapage(struct metapage *mp)
{
set_bit(META_sync, &mp->flag);
write_metapage(mp);
}
static inline void discard_metapage(struct metapage *mp)
{
clear_bit(META_dirty, &mp->flag);
set_bit(META_discard, &mp->flag);
release_metapage(mp);
}
static inline void metapage_nohomeok(struct metapage *mp)
{
struct folio *folio = mp->folio;
folio_lock(folio);
if (!mp->nohomeok++) {
mark_metapage_dirty(mp);
folio_get(folio);
folio_wait_writeback(folio);
}
folio_unlock(folio);
}
/*
* This serializes access to mp->lsn when metapages are added to logsynclist
* without setting nohomeok. i.e. updating imap & dmap
*/
static inline void metapage_wait_for_io(struct metapage *mp)
{
if (test_bit(META_io, &mp->flag))
folio_wait_writeback(mp->folio);
}
/*
* This is called when already holding the metapage
*/
static inline void _metapage_homeok(struct metapage *mp)
{
if (!--mp->nohomeok)
folio_put(mp->folio);
}
static inline void metapage_homeok(struct metapage *mp)
{
hold_metapage(mp);
_metapage_homeok(mp);
put_metapage(mp);
}
extern const struct address_space_operations jfs_metapage_aops;
/*
* This routines invalidate all pages for an extent.
*/
extern void __invalidate_metapages(struct inode *, s64, int);
#define invalidate_pxd_metapages(ip, pxd) \
__invalidate_metapages((ip), addressPXD(&(pxd)), lengthPXD(&(pxd)))
#define invalidate_dxd_metapages(ip, dxd) \
__invalidate_metapages((ip), addressDXD(&(dxd)), lengthDXD(&(dxd)))
#define invalidate_xad_metapages(ip, xad) \
__invalidate_metapages((ip), addressXAD(&(xad)), lengthXAD(&(xad)))
#endif /* _H_JFS_METAPAGE */
] |
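The `metapage_nohomeok()` / `_metapage_homeok()` pair in the header above implements a small pinning protocol: the first blocker takes an extra folio reference, the last unblocker drops it. A standalone sketch of that counting logic (illustrative names, not the kernel types; `folio_refs` stands in for the folio reference count):

```c
#include <assert.h>

/* Illustrative model of the nohomeok pinning protocol (not kernel code). */
struct mp_pin_sketch {
    int nohomeok;    /* writeback-block count */
    int folio_refs;  /* stand-in for the folio reference count */
};

/* metapage_nohomeok(): only the first blocker takes an extra reference. */
static void nohomeok_sketch(struct mp_pin_sketch *mp)
{
    if (!mp->nohomeok++)
        mp->folio_refs++;
}

/* _metapage_homeok(): only the last unblocker drops that reference. */
static void homeok_sketch(struct mp_pin_sketch *mp)
{
    if (!--mp->nohomeok)
        mp->folio_refs--;
}
```

Nesting is therefore safe: any number of paired calls leaves exactly one extra reference held for the duration of the outermost pair.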
| 194/4 | 2026/03/16 16:17 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 33211 output: 27 thoughts: 35
Thoughts: Okay, here's my take on those thoughts, framed as an expert's inner monologue:

**Investigating a Potential JFS Header Issue**

Hmm, hold on a second. I'm looking at this JFS code, and it's flagged an undeclared identifier, `L2PSIZE`, specifically in `jfs_metapage.h`. That's unusual. My first instinct, given the context, is that this constant, likely related to page sizes or block mapping, *should* be defined somewhere. It feels like a fundamental building block. Okay, *it's not there*, so let me immediately pivot and check `jfs_filsys.h`. It's highly probable it's defined there, or in a file it includes. This looks like a basic dependency issue, but I need to quickly confirm its definition or lack thereof. I'm betting it's hiding in the file that defines filesystem structures, though. Let's get to the bottom of this. |
| 195/4 | 2026/03/16 16:17 | tool | read-file |
Args:map[File:fs/jfs/jfs_filsys.h] Results: map[Contents:/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
 * Copyright (C) International Business Machines Corp., 2000-2003
 */
#ifndef _H_JFS_FILSYS
#define _H_JFS_FILSYS
/*
 * jfs_filsys.h
 *
 * file system (implementation-dependent) constants
 *
 * refer to <limits.h> for system wide implementation-dependent constants
 */
/*
 * file system option (superblock flag)
 */
/* directory option */
#define JFS_UNICODE 0x00000001 /* unicode name */
/* mount time flags for error handling */
#define JFS_ERR_REMOUNT_RO 0x00000002 /* remount read-only */
#define JFS_ERR_CONTINUE 0x00000004 /* continue */
#define JFS_ERR_PANIC 0x00000008 /* panic */
#define JFS_ERR_MASK (JFS_ERR_REMOUNT_RO|JFS_ERR_CONTINUE|JFS_ERR_PANIC)
/* Quota support */
#define JFS_USRQUOTA 0x00000010
#define JFS_GRPQUOTA 0x00000020
/* mount time flag to disable journaling to disk */
#define JFS_NOINTEGRITY 0x00000040
/* mount time flag to enable TRIM to ssd disks */
#define JFS_DISCARD 0x00000080
/* commit option */
#define JFS_COMMIT 0x00000f00 /* commit option mask */
#define JFS_GROUPCOMMIT 0x00000100 /* group (of 1) commit */
#define JFS_LAZYCOMMIT 0x00000200 /* lazy commit */
#define JFS_TMPFS 0x00000400 /* temporary file system -
				* do not log/commit:
				* Never implemented
				*/
/* log logical volume option */
#define JFS_INLINELOG 0x00000800 /* inline log within file system */
#define JFS_INLINEMOVE 0x00001000 /* inline log being moved */
/* Secondary aggregate inode table */
#define JFS_BAD_SAIT 0x00010000 /* current secondary ait is bad */
/* sparse regular file support */
#define JFS_SPARSE 0x00020000 /* sparse regular file */
/* DASD Limits F226941 */
#define JFS_DASD_ENABLED 0x00040000 /* DASD limits enabled */
#define JFS_DASD_PRIME 0x00080000 /* Prime DASD usage on boot */
/* big endian flag */
#define JFS_SWAP_BYTES 0x00100000 /* running on big endian computer */
/* Directory index */
#define JFS_DIR_INDEX 0x00200000 /* Persistent index for */
/* platform options */
#define JFS_LINUX 0x10000000 /* Linux support */
#define JFS_DFS 0x20000000 /* DCE DFS LFS support */
				/* Never implemented */
#define JFS_OS2 0x40000000 /* OS/2 support */
				/* case-insensitive name/directory support */
#define JFS_AIX 0x80000000 /* AIX support */
/*
 * buffer cache configuration
 */
/* page size */
#ifdef PSIZE
#undef PSIZE
#endif
#define PSIZE 4096 /* page size (in byte) */
#define L2PSIZE 12 /* log2(PSIZE) */
#define POFFSET 4095 /* offset within page */
/* buffer page size */
#define BPSIZE PSIZE
/*
 * fs fundamental size
 *
 * PSIZE >= file system block size >= PBSIZE >= DISIZE
 */
#define PBSIZE 512 /* physical block size (in byte) */
#define L2PBSIZE 9 /* log2(PBSIZE) */
#define DISIZE 512 /* on-disk inode size (in byte) */
#define L2DISIZE 9 /* log2(DISIZE) */
#define IDATASIZE 256 /* inode inline data size */
#define IXATTRSIZE 128 /* inode inline extended attribute size */
#define XTPAGE_SIZE 4096
#define log2_PAGESIZE 12
#define IAG_SIZE 4096
#define IAG_EXTENT_SIZE 4096
#define INOSPERIAG 4096 /* number of disk inodes per iag */
#define L2INOSPERIAG 12 /* l2 number of disk inodes per iag */
#define INOSPEREXT 32 /* number of disk inode per extent */
#define L2INOSPEREXT 5 /* l2 number of disk inode per extent */
#define IXSIZE (DISIZE * INOSPEREXT) /* inode extent size */
#define INOSPERPAGE 8 /* number of disk inodes per 4K page */
#define L2INOSPERPAGE 3 /* log2(INOSPERPAGE) */
#define IAGFREELIST_LWM 64
#define INODE_EXTENT_SIZE IXSIZE /* inode extent size */
#define NUM_INODE_PER_EXTENT INOSPEREXT
#define NUM_INODE_PER_IAG INOSPERIAG
#define MINBLOCKSIZE 512
#define L2MINBLOCKSIZE 9
#define MAXBLOCKSIZE 4096
#define L2MAXBLOCKSIZE 12
#define MAXFILESIZE ((s64)1 << 52)
#define JFS_LINK_MAX 0xffffffff
/* Minimum number of bytes supported for a JFS partition */
#define MINJFS (0x1000000)
#define MINJFSTEXT "16"
/*
 * file system block size -> physical block size
 */
#define LBOFFSET(x) ((x) & (PBSIZE - 1))
#define LBNUMBER(x) ((x) >> L2PBSIZE)
#define LBLK2PBLK(sb,b) ((b) << (sb->s_blocksize_bits - L2PBSIZE))
#define PBLK2LBLK(sb,b) ((b) >> (sb->s_blocksize_bits - L2PBSIZE))
/* size in byte -> last page number */
#define SIZE2PN(size) ( ((s64)((size) - 1)) >> (L2PSIZE) )
/* size in byte -> last file system block number */
#define SIZE2BN(size, l2bsize) ( ((s64)((size) - 1)) >> (l2bsize) )
/*
 * fixed physical block address (physical block size = 512 byte)
 *
 * NOTE: since we can't guarantee a physical block size of 512 bytes the use of
 * these macros should be removed and the byte offset macros used instead.
 */
#define SUPER1_B 64 /* primary superblock */
#define AIMAP_B (SUPER1_B + 8) /* 1st extent of aggregate inode map */
#define AITBL_B (AIMAP_B + 16) /*
				* 1st extent of aggregate inode table
				*/
#define SUPER2_B (AITBL_B + 32) /* 2ndary superblock pbn */
#define BMAP_B (SUPER2_B + 8) /* block allocation map */
/*
 * SIZE_OF_SUPER defines the total amount of space reserved on disk for the
 * superblock. This is not the same as the superblock structure, since all of
 * this space is not currently being used.
 */
#define SIZE_OF_SUPER PSIZE
/*
 * SIZE_OF_AG_TABLE defines the amount of space reserved to hold the AG table
 */
#define SIZE_OF_AG_TABLE PSIZE
/*
 * SIZE_OF_MAP_PAGE defines the amount of disk space reserved for each page of
 * the inode allocation map (to hold iag)
 */
#define SIZE_OF_MAP_PAGE PSIZE
/*
 * fixed byte offset address
 */
#define SUPER1_OFF 0x8000 /* primary superblock */
#define AIMAP_OFF (SUPER1_OFF + SIZE_OF_SUPER)
				/*
				 * Control page of aggregate inode map
				 * followed by 1st extent of map
				 */
#define AITBL_OFF (AIMAP_OFF + (SIZE_OF_MAP_PAGE << 1))
				/*
				 * 1st extent of aggregate inode table
				 */
#define SUPER2_OFF (AITBL_OFF + INODE_EXTENT_SIZE)
				/*
				 * secondary superblock
				 */
#define BMAP_OFF (SUPER2_OFF + SIZE_OF_SUPER)
				/*
				 * block allocation map
				 */
/*
 * The following macro is used to indicate the number of reserved disk blocks at
 * the front of an aggregate, in terms of physical blocks. This value is
 * currently defined to be 32K. This turns out to be the same as the primary
 * superblock's address, since it directly follows the reserved blocks.
 */
#define AGGR_RSVD_BLOCKS SUPER1_B
/*
 * The following macro is used to indicate the number of reserved bytes at the
 * front of an aggregate. This value is currently defined to be 32K. This
 * turns out to be the same as the primary superblock's byte offset, since it
 * directly follows the reserved blocks.
 */
#define AGGR_RSVD_BYTES SUPER1_OFF
/*
 * The following macro defines the byte offset for the first inode extent in
 * the aggregate inode table. This allows us to find the self inode to find the
 * rest of the table. Currently this value is 44K.
 */
#define AGGR_INODE_TABLE_START AITBL_OFF
/*
 * fixed reserved inode number
 */
/* aggregate inode */
#define AGGR_RESERVED_I 0 /* aggregate inode (reserved) */
#define AGGREGATE_I 1 /* aggregate inode map inode */
#define BMAP_I 2 /* aggregate block allocation map inode */
#define LOG_I 3 /* aggregate inline log inode */
#define BADBLOCK_I 4 /* aggregate bad block inode */
#define FILESYSTEM_I 16 /* 1st/only fileset inode in ait:
			 * fileset inode map inode
			 */
/* per fileset inode */
#define FILESET_RSVD_I 0 /* fileset inode (reserved) */
#define FILESET_EXT_I 1 /* fileset inode extension */
#define ROOT_I 2 /* fileset root inode */
#define ACL_I 3 /* fileset ACL inode */
#define FILESET_OBJECT_I 4 /* the first fileset inode available for a file
			    * or directory or link...
			    */
#define FIRST_FILESET_INO 16 /* the first aggregate inode which describes
			      * an inode. (To fsck this is also the first
			      * inode in part 2 of the agg inode table.)
			      */
/*
 * directory configuration
 */
#define JFS_NAME_MAX 255
#define JFS_PATH_MAX BPSIZE
/*
 * file system state (superblock state)
 */
#define FM_CLEAN 0x00000000 /* file system is unmounted and clean */
#define FM_MOUNT 0x00000001 /* file system is mounted cleanly */
#define FM_DIRTY 0x00000002 /* file system was not unmounted and clean
			     * when mounted or
			     * commit failure occurred while being mounted:
			     * fsck() must be run to repair
			     */
#define FM_LOGREDO 0x00000004 /* log based recovery (logredo()) failed:
			       * fsck() must be run to repair
			       */
#define FM_EXTENDFS 0x00000008 /* file system extendfs() in progress */
#define FM_STATE_MAX 0x0000000f /* max value of s_state */
#endif /* _H_JFS_FILSYS */
] |
| 196/4 | 2026/03/16 16:17 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 36631 output: 707 thoughts: 62912 |
| 197/2 | 2026/03/16 16:22 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 184799 output: 47 thoughts: 449 Thoughts: **Analyzing `metapage_write_one` and Related Functions**
Okay, so I'm looking at `metapage_write_one` and immediately see `ret = filemap_check_errors(mapping);`. The potential problem here is that if the `mapping` pointer is NULL, that `filemap_check_errors` call is going to cause a crash. Now, the code *shouldn't* allow this to happen, but I need to be sure.
I remember thinking about this before. I recall that `folio->mapping` is checked in both `release_metapage` and `force_metapage`. Since those functions are the ones responsible for ensuring that a valid mapping exists before `metapage_write_one` is called, it seems like `metapage_write_one` itself doesn't need modification. Good. We're probably safe there.
But wait, what about `remove_metapage`? Let me quickly review it:
```c
static inline void remove_metapage(struct folio *folio, struct metapage *mp)
{
folio_detach_private(folio);
kunmap(&folio->page);
}
```
Okay, nothing immediately obvious jumps out. But now that I think about it, the CodeExpert brought up a concern about `remove_metapage` dereferencing `folio->mapping` when `MPS_PER_PAGE > 1`. That's a possibility. So, I need to investigate the behavior of `remove_metapage` more closely, specifically considering the case of `MPS_PER_PAGE > 1`. I need to ensure that the code handles this scenario correctly, especially the potential interaction with the `folio->mapping` pointer.
|
| 198/2 | 2026/03/16 16:22 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:remove_metapage] Results: map[SourceCode: 222: static inline void remove_metapage(struct folio *folio, struct metapage *mp)
223: {
224: folio_detach_private(folio);
225: kunmap(&folio->page);
226: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 199/2 | 2026/03/16 16:22 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 185781 output: 19 thoughts: 41 Thoughts: Here's my take on the situation: **Double Definition Detected: `remove_metapage`** Okay, hold on a minute. Something's not right. I'm seeing a double definition of `remove_metapage` in the codebase. That's a classic source of headaches, especially when we're dealing with the core of a filesystem. Let's see... I'll need to confirm which one is actually being used and if there's any conflict. Given that `remove_metapage` seems like a function likely tied to the filesystem's metadata handling, the obvious place to start digging is `fs/jfs/jfs_metapage.c`. That's where I'd expect to find the authoritative definition of how metapages are managed within the JFS. Time to dive into that file and get to the bottom of this. We need to figure out which definition is the intended one and why the duplication exists. It could be an old function left around, a copy-paste error, a problem with include paths, or a whole bunch of other things. This needs a proper investigation. |
| 200/2 | 2026/03/16 16:22 | tool | grepper |
Args:map[Expression:remove_metapage] Results: map[Output:fs/jfs/jfs_metapage.c=93=static inline int insert_metapage(struct folio *folio, struct metapage *mp)
--
fs/jfs/jfs_metapage.c-117-
fs/jfs/jfs_metapage.c:118:static inline void remove_metapage(struct folio *folio, struct metapage *mp)
fs/jfs/jfs_metapage.c-119-{
--
fs/jfs/jfs_metapage.c=156=static int __metapage_migrate_folio(struct address_space *mapping,
--
fs/jfs/jfs_metapage.c-182- if (mps[j])
fs/jfs/jfs_metapage.c:183: remove_metapage(dst, mps[j]);
fs/jfs/jfs_metapage.c-184- }
--
fs/jfs/jfs_metapage.c-197- mp->folio = dst;
fs/jfs/jfs_metapage.c:198: remove_metapage(src, mp);
fs/jfs/jfs_metapage.c-199- }
--
fs/jfs/jfs_metapage.c=213=static inline int insert_metapage(struct folio *folio, struct metapage *mp)
--
fs/jfs/jfs_metapage.c-221-
fs/jfs/jfs_metapage.c:222:static inline void remove_metapage(struct folio *folio, struct metapage *mp)
fs/jfs/jfs_metapage.c-223-{
--
fs/jfs/jfs_metapage.c=232=static int __metapage_migrate_folio(struct address_space *mapping,
--
fs/jfs/jfs_metapage.c-253- mp->folio = dst;
fs/jfs/jfs_metapage.c:254: remove_metapage(src, mp);
fs/jfs/jfs_metapage.c-255-
--
fs/jfs/jfs_metapage.c=309=static inline void drop_metapage(struct folio *folio, struct metapage *mp)
--
fs/jfs/jfs_metapage.c-313- return;
fs/jfs/jfs_metapage.c:314: remove_metapage(folio, mp);
fs/jfs/jfs_metapage.c-315- INCREMENT(mpStat.pagefree);
--
fs/jfs/jfs_metapage.c=612=static bool metapage_release_folio(struct folio *folio, gfp_t gfp_mask)
--
fs/jfs/jfs_metapage.c-633- remove_from_logsync(mp);
fs/jfs/jfs_metapage.c:634: remove_metapage(folio, mp);
fs/jfs/jfs_metapage.c-635- INCREMENT(mpStat.pagefree);
] |
| 201/2 | 2026/03/16 16:22 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 186901 output: 47 thoughts: 18 Thoughts: Here's my attempt at summarizing the thought process, tailored for an expert audience: **Analyzing the `remove_metapage` Function** Okay, let's zero in on `remove_metapage`, specifically at line 118. I need to understand its function and how it fits into the broader picture. Given the name, I'm expecting some kind of metapage cleanup, but the specifics are where the devil is. Is this a simple deletion, a more complex de-referencing, or something else entirely? Line 118 will likely be the core logic of the function, or a call to another function which handles the meat of this action. I’ll be looking for potential side effects, especially if this is operating on shared resources. Are there any concurrency considerations here? What about error handling? And does it correctly remove any associated data structures or references to the metapage? Time to dig in. |
| 202/2 | 2026/03/16 16:22 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:remove_metapage] Results: map[SourceCode: 222: static inline void remove_metapage(struct folio *folio, struct metapage *mp)
223: {
224: folio_detach_private(folio);
225: kunmap(&folio->page);
226: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 203/2 | 2026/03/16 16:22 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 187268 output: 28 thoughts: 15 Thoughts: Okay, here's my interpretation of that thought process, tailored for an expert audience: **Line-Specific Focus & Contextualization** Alright, let me quickly parse this. The prompt indicates that I need to zoom in on line 118 within the referenced file. That specific line is the critical data point here. I immediately begin to mentally construct the surrounding context of the file. What kind of file is it? What language is it? What does the code around line 118 reveal about its purpose? I assume there’s a relevant section surrounding that line, perhaps a critical calculation, a data access point, a conditional statement, or a function definition, that needs direct inspection. My primary focus is understanding *why* line 118 has been flagged as a focal point. |
| 204/2 | 2026/03/16 16:22 | tool | read-file |
Args:map[File:fs/jfs/jfs_metapage.c] Results: map[Contents:// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Copyright (C) International Business Machines Corp., 2000-2005
* Portions Copyright (C) Christoph Hellwig, 2001-2002
*/
#include <linux/blkdev.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/bio.h>
#include <linux/slab.h>
#include <linux/init.h>
#include <linux/buffer_head.h>
#include <linux/mempool.h>
#include <linux/seq_file.h>
#include <linux/writeback.h>
#include <linux/migrate.h>
#include "jfs_incore.h"
#include "jfs_superblock.h"
#include "jfs_filsys.h"
#include "jfs_metapage.h"
#include "jfs_txnmgr.h"
#include "jfs_debug.h"
#ifdef CONFIG_JFS_STATISTICS
static struct {
uint pagealloc; /* # of page allocations */
uint pagefree; /* # of page frees */
uint lockwait; /* # of sleeping lock_metapage() calls */
} mpStat;
#endif
#define metapage_locked(mp) test_bit(META_locked, &(mp)->flag)
#define trylock_metapage(mp) test_and_set_bit_lock(META_locked, &(mp)->flag)
static inline void unlock_metapage(struct metapage *mp)
{
clear_bit_unlock(META_locked, &mp->flag);
wake_up(&mp->wait);
}
static inline void __lock_metapage(struct metapage *mp)
{
DECLARE_WAITQUEUE(wait, current);
INCREMENT(mpStat.lockwait);
add_wait_queue_exclusive(&mp->wait, &wait);
do {
set_current_state(TASK_UNINTERRUPTIBLE);
if (metapage_locked(mp)) {
folio_unlock(mp->folio);
io_schedule();
folio_lock(mp->folio);
}
} while (trylock_metapage(mp));
__set_current_state(TASK_RUNNING);
remove_wait_queue(&mp->wait, &wait);
}
/*
* Must have mp->folio locked
*/
static inline void lock_metapage(struct metapage *mp)
{
if (trylock_metapage(mp))
__lock_metapage(mp);
}
#define METAPOOL_MIN_PAGES 32
static struct kmem_cache *metapage_cache;
static mempool_t *metapage_mempool;
#define MPS_PER_PAGE (PAGE_SIZE >> L2PSIZE)
#if MPS_PER_PAGE > 1
struct meta_anchor {
int mp_count;
atomic_t io_count;
blk_status_t status;
struct metapage *mp[MPS_PER_PAGE];
};
static inline struct metapage *folio_to_mp(struct folio *folio, int offset)
{
struct meta_anchor *anchor = folio->private;
if (!anchor)
return NULL;
return anchor->mp[offset >> L2PSIZE];
}
static inline int insert_metapage(struct folio *folio, struct metapage *mp)
{
struct meta_anchor *a;
int index;
int l2mp_blocks; /* log2 blocks per metapage */
a = folio->private;
if (!a) {
a = kzalloc_obj(struct meta_anchor, GFP_NOFS);
if (!a)
return -ENOMEM;
folio_attach_private(folio, a);
kmap(&folio->page);
}
if (mp) {
l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1);
a->mp_count++;
a->mp[index] = mp;
}
return 0;
}
static inline void remove_metapage(struct folio *folio, struct metapage *mp)
{
struct meta_anchor *a = folio->private;
int l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
int index;
index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1);
BUG_ON(a->mp[index] != mp);
a->mp[index] = NULL;
if (--a->mp_count == 0) {
kfree(a);
folio_detach_private(folio);
kunmap(&folio->page);
}
}
static inline void inc_io(struct folio *folio)
{
struct meta_anchor *anchor = folio->private;
atomic_inc(&anchor->io_count);
}
static inline void dec_io(struct folio *folio, blk_status_t status,
void (*handler)(struct folio *, blk_status_t))
{
struct meta_anchor *anchor = folio->private;
if (anchor->status == BLK_STS_OK)
anchor->status = status;
if (atomic_dec_and_test(&anchor->io_count))
handler(folio, anchor->status);
}
#ifdef CONFIG_MIGRATION
static int __metapage_migrate_folio(struct address_space *mapping,
struct folio *dst, struct folio *src,
enum migrate_mode mode)
{
struct meta_anchor *src_anchor = src->private;
struct metapage *mps[MPS_PER_PAGE] = {0};
struct metapage *mp;
int i, rc;
for (i = 0; i < MPS_PER_PAGE; i++) {
mp = src_anchor->mp[i];
if (mp && metapage_locked(mp))
return -EAGAIN;
}
rc = filemap_migrate_folio(mapping, dst, src, mode);
if (rc)
return rc;
for (i = 0; i < MPS_PER_PAGE; i++) {
mp = src_anchor->mp[i];
if (!mp)
continue;
if (unlikely(insert_metapage(dst, mp))) {
/* If error, roll-back previously inserted pages */
for (int j = 0 ; j < i; j++) {
if (mps[j])
remove_metapage(dst, mps[j]);
}
return -EAGAIN;
}
mps[i] = mp;
}
/* Update the metapage and remove it from src */
for (i = 0; i < MPS_PER_PAGE; i++) {
mp = mps[i];
if (mp) {
int page_offset = mp->data - folio_address(src);
mp->data = folio_address(dst) + page_offset;
mp->folio = dst;
remove_metapage(src, mp);
}
}
return 0;
}
#endif /* CONFIG_MIGRATION */
#else
static inline struct metapage *folio_to_mp(struct folio *folio, int offset)
{
return folio->private;
}
static inline int insert_metapage(struct folio *folio, struct metapage *mp)
{
if (mp) {
folio_attach_private(folio, mp);
kmap(&folio->page);
}
return 0;
}
static inline void remove_metapage(struct folio *folio, struct metapage *mp)
{
folio_detach_private(folio);
kunmap(&folio->page);
}
#define inc_io(folio) do {} while(0)
#define dec_io(folio, status, handler) handler(folio, status)
#ifdef CONFIG_MIGRATION
static int __metapage_migrate_folio(struct address_space *mapping,
struct folio *dst, struct folio *src,
enum migrate_mode mode)
{
struct metapage *mp;
int page_offset;
int rc;
mp = folio_to_mp(src, 0);
if (metapage_locked(mp))
return -EAGAIN;
rc = filemap_migrate_folio(mapping, dst, src, mode);
if (rc)
return rc;
if (unlikely(insert_metapage(dst, mp)))
return -EAGAIN;
page_offset = mp->data - folio_address(src);
mp->data = folio_address(dst) + page_offset;
mp->folio = dst;
remove_metapage(src, mp);
return 0;
}
#endif /* CONFIG_MIGRATION */
#endif
static inline struct metapage *alloc_metapage(gfp_t gfp_mask)
{
struct metapage *mp = mempool_alloc(metapage_mempool, gfp_mask);
if (mp) {
mp->lid = 0;
mp->lsn = 0;
mp->data = NULL;
mp->clsn = 0;
mp->log = NULL;
init_waitqueue_head(&mp->wait);
}
return mp;
}
static inline void free_metapage(struct metapage *mp)
{
mempool_free(mp, metapage_mempool);
}
int __init metapage_init(void)
{
/*
* Allocate the metapage structures
*/
metapage_cache = kmem_cache_create("jfs_mp", sizeof(struct metapage),
0, 0, NULL);
if (metapage_cache == NULL)
return -ENOMEM;
metapage_mempool = mempool_create_slab_pool(METAPOOL_MIN_PAGES,
metapage_cache);
if (metapage_mempool == NULL) {
kmem_cache_destroy(metapage_cache);
return -ENOMEM;
}
return 0;
}
void metapage_exit(void)
{
mempool_destroy(metapage_mempool);
kmem_cache_destroy(metapage_cache);
}
static inline void drop_metapage(struct folio *folio, struct metapage *mp)
{
if (mp->count || mp->nohomeok || test_bit(META_dirty, &mp->flag) ||
test_bit(META_io, &mp->flag))
return;
remove_metapage(folio, mp);
INCREMENT(mpStat.pagefree);
free_metapage(mp);
}
/*
* Metapage address space operations
*/
static sector_t metapage_get_blocks(struct inode *inode, sector_t lblock,
int *len)
{
int rc = 0;
int xflag;
s64 xaddr;
sector_t file_blocks = (inode->i_size + inode->i_sb->s_blocksize - 1) >>
inode->i_blkbits;
if (lblock >= file_blocks)
return 0;
if (lblock + *len > file_blocks)
*len = file_blocks - lblock;
if (inode->i_ino) {
rc = xtLookup(inode, (s64)lblock, *len, &xflag, &xaddr, len, 0);
if ((rc == 0) && *len)
lblock = (sector_t)xaddr;
else
lblock = 0;
} /* else no mapping */
return lblock;
}
static void last_read_complete(struct folio *folio, blk_status_t status)
{
if (status)
printk(KERN_ERR "Read error %d at %#llx\n", status,
folio_pos(folio));
folio_end_read(folio, status == 0);
}
static void metapage_read_end_io(struct bio *bio)
{
struct folio *folio = bio->bi_private;
dec_io(folio, bio->bi_status, last_read_complete);
bio_put(bio);
}
static void remove_from_logsync(struct metapage *mp)
{
struct jfs_log *log = mp->log;
unsigned long flags;
/*
* This can race. Recheck that log hasn't been set to null, and after
* acquiring logsync lock, recheck lsn
*/
if (!log)
return;
LOGSYNC_LOCK(log, flags);
if (mp->lsn) {
mp->log = NULL;
mp->lsn = 0;
mp->clsn = 0;
log->count--;
list_del(&mp->synclist);
}
LOGSYNC_UNLOCK(log, flags);
}
static void last_write_complete(struct folio *folio, blk_status_t status)
{
struct metapage *mp;
unsigned int offset;
if (status) {
int err = blk_status_to_errno(status);
printk(KERN_ERR "metapage_write_end_io: I/O error\n");
mapping_set_error(folio->mapping, err);
}
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (mp && test_bit(META_io, &mp->flag)) {
if (mp->lsn)
remove_from_logsync(mp);
clear_bit(META_io, &mp->flag);
}
/*
* I'd like to call drop_metapage here, but I don't think it's
* safe unless I have the page locked
*/
}
folio_end_writeback(folio);
}
static void metapage_write_end_io(struct bio *bio)
{
struct folio *folio = bio->bi_private;
BUG_ON(!folio->private);
dec_io(folio, bio->bi_status, last_write_complete);
bio_put(bio);
}
static int metapage_write_folio(struct folio *folio,
struct writeback_control *wbc)
{
struct bio *bio = NULL;
int block_offset; /* block offset of mp within page */
struct inode *inode = folio->mapping->host;
int blocks_per_mp = JFS_SBI(inode->i_sb)->nbperpage;
int len;
int xlen;
struct metapage *mp;
int redirty = 0;
sector_t lblock;
int nr_underway = 0;
sector_t pblock;
sector_t next_block = 0;
sector_t page_start;
unsigned long bio_bytes = 0;
unsigned long bio_offset = 0;
int offset;
int bad_blocks = 0;
page_start = folio_pos(folio) >> inode->i_blkbits;
BUG_ON(!folio_test_locked(folio));
BUG_ON(folio_test_writeback(folio));
folio_start_writeback(folio);
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (!mp || !test_bit(META_dirty, &mp->flag))
continue;
if (mp->nohomeok && !test_bit(META_forcewrite, &mp->flag)) {
redirty = 1;
/*
* Make sure this page isn't blocked indefinitely.
* If the journal isn't undergoing I/O, push it
*/
if (mp->log && !(mp->log->cflag & logGC_PAGEOUT))
jfs_flush_journal(mp->log, 0);
continue;
}
clear_bit(META_dirty, &mp->flag);
set_bit(META_io, &mp->flag);
block_offset = offset >> inode->i_blkbits;
lblock = page_start + block_offset;
if (bio) {
if (xlen && lblock == next_block) {
/* Contiguous, in memory & on disk */
len = min(xlen, blocks_per_mp);
xlen -= len;
bio_bytes += len << inode->i_blkbits;
continue;
}
/* Not contiguous */
bio_add_folio_nofail(bio, folio, bio_bytes, bio_offset);
/*
* Increment counter before submitting i/o to keep
* count from hitting zero before we're through
*/
inc_io(folio);
if (!bio->bi_iter.bi_size)
goto dump_bio;
submit_bio(bio);
nr_underway++;
bio = NULL;
} else
inc_io(folio);
xlen = (folio_size(folio) - offset) >> inode->i_blkbits;
pblock = metapage_get_blocks(inode, lblock, &xlen);
if (!pblock) {
printk(KERN_ERR "JFS: metapage_get_blocks failed\n");
/*
* We already called inc_io(), but can't cancel it
* with dec_io() until we're done with the page
*/
bad_blocks++;
continue;
}
len = min(xlen, (int)JFS_SBI(inode->i_sb)->nbperpage);
bio = bio_alloc(inode->i_sb->s_bdev, 1, REQ_OP_WRITE, GFP_NOFS);
bio->bi_iter.bi_sector = pblock << (inode->i_blkbits - 9);
bio->bi_end_io = metapage_write_end_io;
bio->bi_private = folio;
/* Don't call bio_add_page yet, we may add to this vec */
bio_offset = offset;
bio_bytes = len << inode->i_blkbits;
xlen -= len;
next_block = lblock + len;
}
if (bio) {
bio_add_folio_nofail(bio, folio, bio_bytes, bio_offset);
if (!bio->bi_iter.bi_size)
goto dump_bio;
submit_bio(bio);
nr_underway++;
}
if (redirty)
folio_redirty_for_writepage(wbc, folio);
folio_unlock(folio);
if (bad_blocks)
goto err_out;
if (nr_underway == 0)
folio_end_writeback(folio);
return 0;
dump_bio:
print_hex_dump(KERN_ERR, "JFS: dump of bio: ", DUMP_PREFIX_ADDRESS, 16,
4, bio, sizeof(*bio), 0);
bio_put(bio);
folio_unlock(folio);
dec_io(folio, BLK_STS_OK, last_write_complete);
err_out:
while (bad_blocks--)
dec_io(folio, BLK_STS_OK, last_write_complete);
return -EIO;
}
static int metapage_writepages(struct address_space *mapping,
struct writeback_control *wbc)
{
struct blk_plug plug;
struct folio *folio = NULL;
int err;
blk_start_plug(&plug);
while ((folio = writeback_iter(mapping, wbc, folio, &err)))
err = metapage_write_folio(folio, wbc);
blk_finish_plug(&plug);
return err;
}
static int metapage_read_folio(struct file *fp, struct folio *folio)
{
struct inode *inode = folio->mapping->host;
struct bio *bio = NULL;
int block_offset;
int blocks_per_page = i_blocks_per_folio(inode, folio);
sector_t page_start; /* address of page in fs blocks */
sector_t pblock;
int xlen;
unsigned int len;
int offset;
BUG_ON(!folio_test_locked(folio));
page_start = folio_pos(folio) >> inode->i_blkbits;
block_offset = 0;
while (block_offset < blocks_per_page) {
xlen = blocks_per_page - block_offset;
pblock = metapage_get_blocks(inode, page_start + block_offset,
&xlen);
if (pblock) {
if (!folio->private)
insert_metapage(folio, NULL);
inc_io(folio);
if (bio)
submit_bio(bio);
bio = bio_alloc(inode->i_sb->s_bdev, 1, REQ_OP_READ,
GFP_NOFS);
bio->bi_iter.bi_sector =
pblock << (inode->i_blkbits - 9);
bio->bi_end_io = metapage_read_end_io;
bio->bi_private = folio;
len = xlen << inode->i_blkbits;
offset = block_offset << inode->i_blkbits;
bio_add_folio_nofail(bio, folio, len, offset);
block_offset += xlen;
} else
block_offset++;
}
if (bio)
submit_bio(bio);
else
folio_unlock(folio);
return 0;
}
static bool metapage_release_folio(struct folio *folio, gfp_t gfp_mask)
{
struct metapage *mp;
bool ret = true;
int offset;
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (!mp)
continue;
jfs_info("metapage_release_folio: mp = 0x%p", mp);
if (mp->count || mp->nohomeok ||
test_bit(META_dirty, &mp->flag)) {
jfs_info("count = %ld, nohomeok = %d", mp->count,
mp->nohomeok);
ret = false;
continue;
}
if (mp->lsn)
remove_from_logsync(mp);
remove_metapage(folio, mp);
INCREMENT(mpStat.pagefree);
free_metapage(mp);
}
return ret;
}
#ifdef CONFIG_MIGRATION
/*
* metapage_migrate_folio - Migration function for JFS metapages
*/
static int metapage_migrate_folio(struct address_space *mapping,
struct folio *dst, struct folio *src,
enum migrate_mode mode)
{
int expected_count;
if (!src->private)
return filemap_migrate_folio(mapping, dst, src, mode);
/* Check whether page does not have extra refs before we do more work */
expected_count = folio_expected_ref_count(src) + 1;
if (folio_ref_count(src) != expected_count)
return -EAGAIN;
return __metapage_migrate_folio(mapping, dst, src, mode);
}
#else
#define metapage_migrate_folio NULL
#endif /* CONFIG_MIGRATION */
static void metapage_invalidate_folio(struct folio *folio, size_t offset,
size_t length)
{
BUG_ON(offset || length < folio_size(folio));
BUG_ON(folio_test_writeback(folio));
metapage_release_folio(folio, 0);
}
const struct address_space_operations jfs_metapage_aops = {
.read_folio = metapage_read_folio,
.writepages = metapage_writepages,
.release_folio = metapage_release_folio,
.invalidate_folio = metapage_invalidate_folio,
.dirty_folio = filemap_dirty_folio,
.migrate_folio = metapage_migrate_folio,
};
struct metapage *__get_metapage(struct inode *inode, unsigned long lblock,
unsigned int size, int absolute,
unsigned long new)
{
int l2BlocksPerPage;
int l2bsize;
struct address_space *mapping;
struct metapage *mp = NULL;
struct folio *folio;
unsigned long page_index;
unsigned long page_offset;
jfs_info("__get_metapage: ino = %ld, lblock = 0x%lx, abs=%d",
inode->i_ino, lblock, absolute);
l2bsize = inode->i_blkbits;
l2BlocksPerPage = PAGE_SHIFT - l2bsize;
page_index = lblock >> l2BlocksPerPage;
page_offset = (lblock - (page_index << l2BlocksPerPage)) << l2bsize;
if ((page_offset + size) > PAGE_SIZE) {
jfs_err("MetaData crosses page boundary!!");
jfs_err("lblock = %lx, size = %d", lblock, size);
dump_stack();
return NULL;
}
if (absolute)
mapping = JFS_SBI(inode->i_sb)->direct_inode->i_mapping;
else {
/*
* If an nfs client tries to read an inode that is larger
* than any existing inodes, we may try to read past the
* end of the inode map
*/
if ((lblock << inode->i_blkbits) >= inode->i_size)
return NULL;
mapping = inode->i_mapping;
}
if (new && (PSIZE == PAGE_SIZE)) {
folio = filemap_grab_folio(mapping, page_index);
if (IS_ERR(folio)) {
jfs_err("filemap_grab_folio failed!");
return NULL;
}
folio_mark_uptodate(folio);
} else {
folio = read_mapping_folio(mapping, page_index, NULL);
if (IS_ERR(folio)) {
jfs_err("read_mapping_page failed!");
return NULL;
}
folio_lock(folio);
}
mp = folio_to_mp(folio, page_offset);
if (mp) {
if (mp->logical_size != size) {
jfs_error(inode->i_sb,
"get_mp->logical_size != size\n");
jfs_err("logical_size = %d, size = %d",
mp->logical_size, size);
dump_stack();
goto unlock;
}
mp->count++;
lock_metapage(mp);
if (test_bit(META_discard, &mp->flag)) {
if (!new) {
jfs_error(inode->i_sb,
"using a discarded metapage\n");
discard_metapage(mp);
goto unlock;
}
clear_bit(META_discard, &mp->flag);
}
} else {
INCREMENT(mpStat.pagealloc);
mp = alloc_metapage(GFP_NOFS);
if (!mp)
goto unlock;
mp->folio = folio;
mp->sb = inode->i_sb;
mp->flag = 0;
mp->xflag = COMMIT_PAGE;
mp->count = 1;
mp->nohomeok = 0;
mp->logical_size = size;
mp->data = folio_address(folio) + page_offset;
mp->index = lblock;
if (unlikely(insert_metapage(folio, mp))) {
free_metapage(mp);
goto unlock;
}
lock_metapage(mp);
}
if (new) {
jfs_info("zeroing mp = 0x%p", mp);
memset(mp->data, 0, PSIZE);
}
folio_unlock(folio);
jfs_info("__get_metapage: returning = 0x%p data = 0x%p", mp, mp->data);
return mp;
unlock:
folio_unlock(folio);
return NULL;
}
void grab_metapage(struct metapage * mp)
{
jfs_info("grab_metapage: mp = 0x%p", mp);
folio_get(mp->folio);
folio_lock(mp->folio);
mp->count++;
lock_metapage(mp);
folio_unlock(mp->folio);
}
static int metapage_write_one(struct folio *folio)
{
struct address_space *mapping = folio->mapping;
struct writeback_control wbc = {
.sync_mode = WB_SYNC_ALL,
.nr_to_write = folio_nr_pages(folio),
};
int ret = 0;
BUG_ON(!folio_test_locked(folio));
folio_wait_writeback(folio);
if (folio_clear_dirty_for_io(folio)) {
folio_get(folio);
ret = metapage_write_folio(folio, &wbc);
if (ret == 0)
folio_wait_writeback(folio);
folio_put(folio);
} else {
folio_unlock(folio);
}
if (!ret)
ret = filemap_check_errors(mapping);
return ret;
}
void force_metapage(struct metapage *mp)
{
struct folio *folio = mp->folio;
jfs_info("force_metapage: mp = 0x%p", mp);
set_bit(META_forcewrite, &mp->flag);
clear_bit(META_sync, &mp->flag);
folio_get(folio);
folio_lock(folio);
folio_mark_dirty(folio);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
clear_bit(META_forcewrite, &mp->flag);
folio_put(folio);
}
void hold_metapage(struct metapage *mp)
{
folio_lock(mp->folio);
}
void put_metapage(struct metapage *mp)
{
if (mp->count || mp->nohomeok) {
/* Someone else will release this */
folio_unlock(mp->folio);
return;
}
folio_get(mp->folio);
mp->count++;
lock_metapage(mp);
folio_unlock(mp->folio);
release_metapage(mp);
}
void release_metapage(struct metapage * mp)
{
struct folio *folio = mp->folio;
jfs_info("release_metapage: mp = 0x%p, flag = 0x%lx", mp, mp->flag);
folio_lock(folio);
unlock_metapage(mp);
assert(mp->count);
if (--mp->count || mp->nohomeok) {
folio_unlock(folio);
folio_put(folio);
return;
}
if (test_bit(META_dirty, &mp->flag)) {
folio_mark_dirty(folio);
if (test_bit(META_sync, &mp->flag)) {
clear_bit(META_sync, &mp->flag);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
folio_lock(folio);
}
} else if (mp->lsn) /* discard_metapage doesn't remove it */
remove_from_logsync(mp);
/* Try to keep metapages from using up too much memory */
drop_metapage(folio, mp);
folio_unlock(folio);
folio_put(folio);
}
void __invalidate_metapages(struct inode *ip, s64 addr, int len)
{
sector_t lblock;
int l2BlocksPerPage = PAGE_SHIFT - ip->i_blkbits;
int BlocksPerPage = 1 << l2BlocksPerPage;
/* All callers are interested in block device's mapping */
struct address_space *mapping =
JFS_SBI(ip->i_sb)->direct_inode->i_mapping;
struct metapage *mp;
unsigned int offset;
/*
* Mark metapages to discard. They will eventually be
* released, but should not be written.
*/
for (lblock = addr & ~(BlocksPerPage - 1); lblock < addr + len;
lblock += BlocksPerPage) {
struct folio *folio = filemap_lock_folio(mapping,
lblock >> l2BlocksPerPage);
if (IS_ERR(folio))
continue;
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (!mp)
continue;
if (mp->index < addr)
continue;
if (mp->index >= addr + len)
break;
clear_bit(META_dirty, &mp->flag);
set_bit(META_discard, &mp->flag);
if (mp->lsn)
remove_from_logsync(mp);
}
folio_unlock(folio);
folio_put(folio);
}
}
#ifdef CONFIG_JFS_STATISTICS
int jfs_mpstat_proc_show(struct seq_file *m, void *v)
{
seq_printf(m,
"JFS Metapage statistics\n"
"=======================\n"
"page allocations = %d\n"
"page frees = %d\n"
"lock waits = %d\n",
mpStat.pagealloc,
mpStat.pagefree,
mpStat.lockwait);
return 0;
}
#endif
] |
| 205/2 | 2026/03/16 16:22 | llm | debugger |
Model:gemini-3.1-pro-preview Tokens: input: 196061 output: 46 thoughts: 1028 Thoughts: **Analyzing a Potential Kernel Crash** Okay, so I've been looking at this code, specifically the `remove_metapage` function. I see it's defined twice depending on whether `MPS_PER_PAGE` is greater than 1 or equal to 1. My initial thought was focused on the case where `MPS_PER_PAGE > 1`, as that's where I spotted a potential vulnerability: the dereference of `folio->mapping->host->i_blkbits`. If `folio->mapping` is NULL, this clearly leads to a crash. I then ran some quick mental math, remembering that `MPS_PER_PAGE` is calculated as `PAGE_SIZE >> L2PSIZE`, where `L2PSIZE` is 12 (i.e., 4096-byte metapages). That means, if `PAGE_SIZE` is 4096, `MPS_PER_PAGE` is 1, but if `PAGE_SIZE` is 64K, `MPS_PER_PAGE` is 16, which means this crash is possible. My immediate reaction was to fix it by using `mp->sb->s_blocksize_bits` instead, since `mp->sb` *is* available. It seems much safer, but, wait, `i_blkbits` is the same as `s_blocksize_bits` for JFS (which only supports 4096-byte block sizes, as far as I recall). However, let's just make sure, and check `insert_metapage` and the usage there too. If we replace `folio->mapping->host->i_blkbits` with `mp->sb->s_blocksize_bits` there as well, it will avoid the null pointer dereference. This would be a clean and solid solution, changing the problematic line. But now, I remember that we are debugging on x86_64 where `PAGE_SIZE` is 4096. This means `MPS_PER_PAGE` is 1, so the `remove_metapage` being used is actually the `#else` version, which doesn't dereference `folio->mapping`. This means the crash is *not* in `remove_metapage` directly! The root cause must be somewhere else. I will mention the `remove_metapage` vulnerability as an additional note, but the immediate solution for the crash we are debugging is in `release_metapage` and `force_metapage`. |
| 206/2 | 2026/03/16 16:23 | tool | codeexpert |
Args:map[Question:Is `mp->sb->s_blocksize_bits` always equal to `folio->mapping->host->i_blkbits` in JFS?] Results: map[Answer:In JFS (Journaled File System), the relationship between `mp->sb->s_blocksize_bits` and `folio->mapping->host->i_blkbits` is maintained such that they are **always equal**.
Here is the detailed reasoning based on the kernel source code:
1. **Fixed Block Size in JFS**: JFS is designed to work with a fixed block size of 4KB (`PSIZE`). During the mount process, the function `jfs_fill_super` (in `fs/jfs/super.c`) calls `sb_set_blocksize(sb, PSIZE)`, which sets `sb->s_blocksize` to 4096 and `sb->s_blocksize_bits` to 12. The function `chkSuper` (in `fs/jfs/jfs_mount.c`) explicitly validates that the on-disk block size matches `PSIZE`:
```c
bsize = le32_to_cpu(j_sb->s_bsize);
if (bsize != PSIZE) {
jfs_err("Only 4K block size supported!");
return -EINVAL;
}
```
2. **Inode Initialization**: Every inode in JFS (including the special `direct_inode` used for metadata and inodes for regular files/directories) is initialized using `new_inode(sb)` or `iget_locked(sb, ino)`. These functions call `inode_init_always`, which sets `inode->i_blkbits = sb->s_blocksize_bits`. Since JFS never changes `sb->s_blocksize_bits` after mount and does not modify `i_blkbits` for its inodes, all JFS inodes have `i_blkbits` equal to 12.
3. **Metapage Association**: A `metapage` (`mp`) is always associated with a `folio` and a `super_block`. In `__get_metapage` (in `fs/jfs/jfs_metapage.c`), when a new metapage is allocated:
```c
mp->folio = folio;
mp->sb = inode->i_sb;
```
The `folio` belongs to the `mapping` of either the metadata `direct_inode` or a specific file/directory `inode`. In both cases, `folio->mapping->host` is a JFS inode whose `i_sb` is the same as `mp->sb`.
4. **Consistency**: Since `mp->sb` is the same as `folio->mapping->host->i_sb`, and all JFS inodes have their `i_blkbits` set to the superblock's `s_blocksize_bits` at creation and never changed, the two values remain identical.
Even on systems with larger page sizes (e.g., 64KB), JFS still operates with 4KB blocks. In such cases, `sb->s_blocksize_bits` remains 12, and `inode->i_blkbits` also remains 12, even though `PAGE_SHIFT` would be 16. The metapage management code specifically handles the case where multiple metapages reside within a single folio using these bit shifts.
Conclusion: **Yes**, `mp->sb->s_blocksize_bits` is always equal to `folio->mapping->host->i_blkbits` in JFS.] |
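The invariant the answer above rests on can be modeled in a few lines of userspace C: JFS sets the superblock block size once at mount (`sb_set_blocksize(sb, PSIZE)`), and every inode copies `s_blocksize_bits` into `i_blkbits` at creation (via `inode_init_always`), so the two values stay equal. The struct and helper names below are simplified stand-ins for the kernel's, under that assumption.

```c
#include <assert.h>

struct mock_sb    { unsigned int s_blocksize, s_blocksize_bits; };
struct mock_inode { struct mock_sb *i_sb; unsigned int i_blkbits; };

/* Models sb_set_blocksize: store the size and its log2 (size is a power of 2). */
static void mock_set_blocksize(struct mock_sb *sb, unsigned int size)
{
	sb->s_blocksize = size;
	sb->s_blocksize_bits = 0;
	while ((1U << sb->s_blocksize_bits) < size)
		sb->s_blocksize_bits++;
}

/* Models the relevant part of inode_init_always: i_blkbits is copied
 * from the superblock at inode creation and never changed by JFS. */
static void mock_new_inode(struct mock_inode *inode, struct mock_sb *sb)
{
	inode->i_sb = sb;
	inode->i_blkbits = sb->s_blocksize_bits;
}
```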
| 207/3 | 2026/03/16 16:23 | agent | codeexpert |
3mModel:gemini-3-flash-preview Instruction: You are a capable Linux kernel developer tasked with researching complex questions about kernel source code. You will be given a concrete question, and need to provide a concrete answer. Use tools extensively while researching the question. Don't make assumptions, or rely on your previous knowledge about the kernel source code, use available tools to access the actual source code. Use all available sources of information: - kernel source code - documentation in the Documentation dir in the source tree - git commits descriptions, git blame Prefer calling several tools at the same time to save round-trips. Prompt: Is `mp->sb->s_blocksize_bits` always equal to `folio->mapping->host->i_blkbits` in JFS? Reply: In JFS (Journaled File System), the relationship between `mp->sb->s_blocksize_bits` and `folio->mapping->host->i_blkbits` is maintained such that they are **always equal**.
Conclusion: **Yes**, `mp->sb->s_blocksize_bits` is always equal to `folio->mapping->host->i_blkbits` in JFS. |
| 208/4 | 2026/03/16 16:23 | llm | codeexpert |
0mModel:gemini-3-flash-preview Tokens: input: 2284 output: 22 thoughts: 140 Thoughts: **Investigating JFS Block Size Consistency** Okay, so the question before me is about the JFS filesystem and a potential consistency issue. Specifically, I need to figure out if `mp->sb->s_blocksize_bits` is *always* equivalent to `folio->mapping->host->i_blkbits`. The user seems to be hinting that there might be a discrepancy lurking, and as an expert, I need to understand this fully. My initial thoughts are to break this down. First, I have to clarify the context. What is `mp` here? Based on my experience, it likely refers to a metapage structure within JFS. Let's confirm that by digging into the JFS source code and finding the definition of `metapage`. Next, I need to see how these two values are related – or *if* they're related at all. I have to find where `mp->sb` and `folio->mapping->host->i_blkbits` are used in conjunction. Are they initialized in the same function? Is one derived from the other? Or are they set independently, potentially opening the door for inconsistency? The real crux of the matter is identifying any scenarios where these values *could* diverge. Are there any operations, like resizing a filesystem, that could potentially lead to `mp->sb->s_blocksize_bits` and `folio->mapping->host->i_blkbits` being different? This is the core investigation. I'm going to start by tracing the initialization and usage of these variables within the JFS codebase to either confirm or refute the hypothesis of a potential bug. |
| 209/4 | 2026/03/16 16:23 | tool | codesearch-dir-index |
0mArgs:map[Dir:fs/jfs] Results: map[Files:[acl.c file.c inode.c ioctl.c jfs_acl.h jfs_btree.h jfs_debug.c jfs_debug.h jfs_dinode.h jfs_discard.c jfs_discard.h jfs_dmap.c jfs_dmap.h jfs_dtree.c jfs_dtree.h jfs_extent.c jfs_extent.h jfs_filsys.h jfs_imap.c jfs_imap.h jfs_incore.h jfs_inode.c jfs_inode.h jfs_lock.h jfs_logmgr.c jfs_logmgr.h jfs_metapage.c jfs_metapage.h jfs_mount.c jfs_superblock.h jfs_txnmgr.c jfs_txnmgr.h jfs_types.h jfs_umount.c jfs_unicode.c jfs_unicode.h jfs_xattr.h jfs_xtree.c jfs_xtree.h namei.c resize.c super.c symlink.c xattr.c] Subdirs:<nil>] |
| 210/4 | 2026/03/16 16:23 | llm | codeexpert |
0mModel:gemini-3-flash-preview Tokens: input: 3198 output: 32 thoughts: 29 Thoughts: Okay, here's my summary, presented as a stream of thought: **Defining the `metapage`** Right, let's get down to brass tacks. I need to understand what's meant by "metapage" in this context. Given that we're talking about the JFS filesystem, the most logical place to start digging is the header file where this term would be defined. So, I'll head over to `fs/jfs/jfs_metapage.h`. That's where I expect to find the authoritative definition, at the very least. From there, I can analyze the structure and the related functions. |
| 211/4 | 2026/03/16 16:23 | tool | codesearch-file-index |
0mArgs:map[SourceFile:fs/jfs/jfs_metapage.h] Results: map[Entities:[map[Kind:function Name:_metapage_homeok] map[Kind:function Name:discard_metapage] map[Kind:function Name:flush_metapage] map[Kind:function Name:metapage_homeok] map[Kind:function Name:metapage_nohomeok] map[Kind:function Name:metapage_wait_for_io] map[Kind:function Name:write_metapage] map[Kind:struct Name:metapage]]] |
| 212/4 | 2026/03/16 16:23 | llm | codeexpert |
0mModel:gemini-3-flash-preview Tokens: input: 3548 output: 45 thoughts: 12 Thoughts: Okay, here's my attempt at summarizing the thought process, tailored for an expert audience, and written in the first person: **Examining the `struct metapage` Definition** Alright, let's dive right in. My initial thought is to refresh my memory and pinpoint the exact definition of `struct metapage`. I know I've worked with it before, but with so many layers in this architecture, I want to be absolutely certain I'm on the same page. No point in proceeding until I have the foundation precisely in mind. So, the first step is to pull up that definition and quickly review the relevant fields, potential alignment issues, and implicit assumptions it makes. I need to understand its role in memory management, especially in relation to the underlying physical pages and any metadata it carries, like reference counts, flags, or protection information. Anything in there that could affect performance or introduce subtle bugs has to be immediately obvious before I go any further. I'll be looking for potential optimization opportunities or, conversely, areas where it might be overly complex and prone to errors. Time to get a solid grasp of this critical data structure. |
| 213/4 | 2026/03/16 16:23 | tool | codesearch-definition-source |
0mArgs:map[ContextFile:fs/jfs/jfs_metapage.h IncludeLines:true Name:metapage] Results: map[SourceCode: 11: struct metapage {
12: /* Common logsyncblk prefix (see jfs_logmgr.h) */
13: u16 xflag;
14: u16 unused;
15: lid_t lid;
16: int lsn;
17: struct list_head synclist;
18: /* End of logsyncblk prefix */
19:
20: unsigned long flag; /* See Below */
21: unsigned long count; /* Reference count */
22: void *data; /* Data pointer */
23: sector_t index; /* block address of page */
24: wait_queue_head_t wait;
25:
26: /* implementation */
27: struct folio *folio;
28: struct super_block *sb;
29: unsigned int logical_size;
30:
31: /* Journal management */
32: int clsn;
33: int nohomeok;
34: struct jfs_log *log;
35: };
SourceFile:fs/jfs/jfs_metapage.h] |
| 214/4 | 2026/03/16 16:23 | llm | codeexpert |
0mModel:gemini-3-flash-preview Tokens: input: 4174 output: 32 thoughts: 139 Thoughts: Okay, here's my take on this – diving into the JFS metapage structure. **Diving into JFS Metapage Consistency** Alright, so I'm looking at this `struct metapage`. I see it's got a `struct folio *folio` and a `struct super_block *sb`. The immediate question that pops into my head, given my familiarity with these components, is: are the block size bits consistent between the superblock (`mp->sb->s_blocksize_bits`) and the inode associated with the folio (`mp->folio->mapping->host->i_blkbits`)? That's the crux of it. I'm thinking about metadata handling in JFS, and I know metapages are fundamental there. Specifically, these are linked to a folio, which in turn lives within an address space or "mapping". That "mapping" is rooted to a particular inode or "host". This all points to how JFS represents metadata on disk. I'm pretty sure `fs/jfs/jfs_metapage.c` would be the place to look for the creation and initialization of these metapages to see the details of how they are formed. I need to trace this back. The consistency of these fields is absolutely critical. I need to track down how these metapages are constructed. |
| 215/4 | 2026/03/16 16:23 | tool | codesearch-file-index |
0mArgs:map[SourceFile:fs/jfs/jfs_metapage.c] Results: map[Entities:[map[Kind:function Name:__get_metapage] map[Kind:function Name:__invalidate_metapages] map[Kind:function Name:__lock_metapage] map[Kind:function Name:__metapage_migrate_folio] map[Kind:function Name:alloc_metapage] map[Kind:function Name:drop_metapage] map[Kind:function Name:folio_to_mp] map[Kind:function Name:force_metapage] map[Kind:function Name:free_metapage] map[Kind:function Name:grab_metapage] map[Kind:function Name:hold_metapage] map[Kind:function Name:insert_metapage] map[Kind:function Name:last_read_complete] map[Kind:function Name:last_write_complete] map[Kind:function Name:lock_metapage] map[Kind:function Name:metapage_exit] map[Kind:function Name:metapage_get_blocks] map[Kind:function Name:metapage_init] map[Kind:function Name:metapage_invalidate_folio] map[Kind:function Name:metapage_migrate_folio] map[Kind:function Name:metapage_read_end_io] map[Kind:function Name:metapage_read_folio] map[Kind:function Name:metapage_release_folio] map[Kind:function Name:metapage_write_end_io] map[Kind:function Name:metapage_write_folio] map[Kind:function Name:metapage_write_one] map[Kind:function Name:metapage_writepages] map[Kind:function Name:put_metapage] map[Kind:function Name:release_metapage] map[Kind:function Name:remove_from_logsync] map[Kind:function Name:remove_metapage] map[Kind:function Name:unlock_metapage] map[Kind:global_variable Name:jfs_metapage_aops]]] |
| 216/4 | 2026/03/16 16:23 | llm | codeexpert |
0mModel:gemini-3-flash-preview Tokens: input: 5203 output: 48 thoughts: 21 Thoughts: Okay, let's dive into this. Here's how I'm thinking about it, focusing on `__get_metapage` and `alloc_metapage`: **Dissecting Metapage Allocation** Right, so we're talking metapages. My initial focus is naturally drawn to `__get_metapage` and `alloc_metapage`. I know these are crucial, fundamental operations within memory management. `alloc_metapage` – that screams allocation, obviously. I'm immediately wondering about the underlying allocator, its strategies (buddy, slab, etc.), and how it's integrated into the larger framework. Is it a direct kernel-level allocation, or are we dealing with some abstraction layer? What metadata is associated with a metapage? That's what I'm keen to understand. Then there's `__get_metapage`. The double underscore prefix is a clue... it hints at an internal, perhaps protected or low-level function. What's the context for its use? Is this a cache lookup, a direct retrieval from a specific pool, or something else entirely? I'm already mentally forming a flowchart: allocation -> potential caching -> retrieval -> use. The flow is probably more nuanced. I need to know how these functions interact. What data structures are they manipulating? Are there mutexes or spinlocks involved, and if so, where are they employed to handle contention? I'm going to watch out for potential race conditions. Performance is obviously a concern. Are there any likely bottlenecks in the metapage lifecycle, and how are these functions optimized to avoid them? I’ll be thinking about cache coherency too, especially if we’re in a multi-processor environment. I'm expecting some kind of reference counting or other scheme to manage the lifetime of these metapages to handle all that correctly. I want to look for error handling. What happens if allocation fails? Does it return an error code, panic, or attempt some kind of graceful degradation? How are memory leaks handled, if at all? 
These functions are critical, so robust error handling is paramount. Basically, I want to understand *exactly* how these two functions work together to ensure efficient, safe, and reliable metapage management. That's the core of my mental model. |
| 217/4 | 2026/03/16 16:23 | tool | codesearch-definition-source |
0mArgs:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:__get_metapage] Results: map[SourceCode: 683: struct metapage *__get_metapage(struct inode *inode, unsigned long lblock,
684: unsigned int size, int absolute,
685: unsigned long new)
686: {
687: int l2BlocksPerPage;
688: int l2bsize;
689: struct address_space *mapping;
690: struct metapage *mp = NULL;
691: struct folio *folio;
692: unsigned long page_index;
693: unsigned long page_offset;
694:
695: jfs_info("__get_metapage: ino = %ld, lblock = 0x%lx, abs=%d",
696: inode->i_ino, lblock, absolute);
697:
698: l2bsize = inode->i_blkbits;
699: l2BlocksPerPage = PAGE_SHIFT - l2bsize;
700: page_index = lblock >> l2BlocksPerPage;
701: page_offset = (lblock - (page_index << l2BlocksPerPage)) << l2bsize;
702: if ((page_offset + size) > PAGE_SIZE) {
703: jfs_err("MetaData crosses page boundary!!");
704: jfs_err("lblock = %lx, size = %d", lblock, size);
705: dump_stack();
706: return NULL;
707: }
708: if (absolute)
709: mapping = JFS_SBI(inode->i_sb)->direct_inode->i_mapping;
710: else {
711: /*
712: * If an nfs client tries to read an inode that is larger
713: * than any existing inodes, we may try to read past the
714: * end of the inode map
715: */
716: if ((lblock << inode->i_blkbits) >= inode->i_size)
717: return NULL;
718: mapping = inode->i_mapping;
719: }
720:
721: if (new && (PSIZE == PAGE_SIZE)) {
722: folio = filemap_grab_folio(mapping, page_index);
723: if (IS_ERR(folio)) {
724: jfs_err("filemap_grab_folio failed!");
725: return NULL;
726: }
727: folio_mark_uptodate(folio);
728: } else {
729: folio = read_mapping_folio(mapping, page_index, NULL);
730: if (IS_ERR(folio)) {
731: jfs_err("read_mapping_page failed!");
732: return NULL;
733: }
734: folio_lock(folio);
735: }
736:
737: mp = folio_to_mp(folio, page_offset);
738: if (mp) {
739: if (mp->logical_size != size) {
740: jfs_error(inode->i_sb,
741: "get_mp->logical_size != size\n");
742: jfs_err("logical_size = %d, size = %d",
743: mp->logical_size, size);
744: dump_stack();
745: goto unlock;
746: }
747: mp->count++;
748: lock_metapage(mp);
749: if (test_bit(META_discard, &mp->flag)) {
750: if (!new) {
751: jfs_error(inode->i_sb,
752: "using a discarded metapage\n");
753: discard_metapage(mp);
754: goto unlock;
755: }
756: clear_bit(META_discard, &mp->flag);
757: }
758: } else {
759: INCREMENT(mpStat.pagealloc);
760: mp = alloc_metapage(GFP_NOFS);
761: if (!mp)
762: goto unlock;
763: mp->folio = folio;
764: mp->sb = inode->i_sb;
765: mp->flag = 0;
766: mp->xflag = COMMIT_PAGE;
767: mp->count = 1;
768: mp->nohomeok = 0;
769: mp->logical_size = size;
770: mp->data = folio_address(folio) + page_offset;
771: mp->index = lblock;
772: if (unlikely(insert_metapage(folio, mp))) {
773: free_metapage(mp);
774: goto unlock;
775: }
776: lock_metapage(mp);
777: }
778:
779: if (new) {
780: jfs_info("zeroing mp = 0x%p", mp);
781: memset(mp->data, 0, PSIZE);
782: }
783:
784: folio_unlock(folio);
785: jfs_info("__get_metapage: returning = 0x%p data = 0x%p", mp, mp->data);
786: return mp;
787:
788: unlock:
789: folio_unlock(folio);
790: return NULL;
791: }
SourceFile:fs/jfs/jfs_metapage.c] |
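The lblock-to-folio arithmetic at the top of `__get_metapage` (lines 698–701 of the listing) can be checked in isolation. With JFS's fixed 4K blocks (`l2bsize = 12`), one block is exactly one page on a 4K-page system, while a 64K page (`page_shift = 16`) holds 16 metapage-sized slots per folio. The helper below is an extraction for illustration, not a kernel function.

```c
#include <assert.h>

/* Same arithmetic as __get_metapage: which folio holds a given logical
 * block, and at what byte offset within that folio. */
static void locate(unsigned long lblock, int l2bsize, int page_shift,
		   unsigned long *page_index, unsigned long *page_offset)
{
	int l2_blocks_per_page = page_shift - l2bsize;

	*page_index = lblock >> l2_blocks_per_page;
	*page_offset = (lblock - (*page_index << l2_blocks_per_page)) << l2bsize;
}
```

For example, on a 64K-page system, block 0x13 lands in folio 1 at offset 0x3000; on a 4K-page system every block is its own folio at offset 0.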
| 218/4 | 2026/03/16 16:23 | llm | codeexpert |
0mModel:gemini-3-flash-preview Tokens: input: 7399 output: 21 thoughts: 363 Thoughts: **Analyzing JFS Metapage Retrieval** Okay, let's break down this `__get_metapage` function within the JFS context. I'm seeing that the code is retrieving a `folio` and associating it with a `metapage` structure (`mp`). The `folio`'s origin is either the standard `inode->i_mapping` or, crucially, `JFS_SBI(inode->i_sb)->direct_inode->i_mapping`, depending on whether the `absolute` flag is set. Essentially, the `absolute` flag is dictating which mapping we're using. Regardless of which mapping is selected, the superblock (`inode->i_sb`) is being assigned to `mp->sb`. This makes me think about how the associated host object of the mapping. It'll either be the main `inode` or the `direct_inode` associated with the specific JFS implementation. The core question that jumps out at me now is the relationship between block size information within the inode and the superblock. Specifically, I'm wondering if the block size bits, `inode->i_sb->s_blocksize_bits`, are consistently equal to `inode->i_blkbits`, and if this relationship holds true for the `direct_inode` as well. In other words, is `inode->i_sb->s_blocksize_bits` always the same as `JFS_SBI(inode->i_sb)->direct_inode->i_blkbits`? My experience suggests that, in most filesystem implementations, the `inode->i_blkbits` is initialized from the superblock's `s_blocksize_bits`. So, if this is the case here, then `inode->i_blkbits` should be the same as `inode->i_sb->s_blocksize_bits`, assuming all has been set correctly. To nail down whether JFS follows this expected pattern, I need to check the origin of `direct_inode` and where its `i_blkbits` gets its value. It's likely that the initialization of the `direct_inode` happens during the filesystem mount or superblock initialization, likely somewhere in `fs/jfs/jfs_mount.c` or `fs/jfs/super.c`. That's where I need to focus my investigation. |
| 219/4 | 2026/03/16 16:23 | tool | codesearch-find-references |
0mError:requested entity does not exist Args: map[Name:direct_inode] Results: map[References:<nil> TruncatedOutput:false] |
| 220/4 | 2026/03/16 16:23 | llm | codeexpert |
0mModel:gemini-3-flash-preview Tokens: input: 8300 output: 47 thoughts: 36 Thoughts: Okay, here's my interpretation of that thought process, from an expert's perspective: **Pinpointing the Data Structure** Right, let's see... the problem points towards a field within the `jfs_sb_info` structure. That sounds about right, given the context. Now, to confirm and locate that specific field, I need to consult the source code. Let's head over to the `fs/jfs/jfs_incore.h` header file. That should have the definition for `jfs_sb_info`, including all its members. With a little luck, I'll find what I'm looking for quickly in there. This is likely where the relevant data or flag resides. Time to dig in! |
| 221/4 | 2026/03/16 16:23 | tool | codesearch-definition-source |
0mArgs:map[ContextFile:fs/jfs/jfs_incore.h IncludeLines:true Name:jfs_sb_info] Results: map[SourceCode: 156: /*
157: * JFS-private superblock information.
158: */
159: struct jfs_sb_info {
160: struct super_block *sb; /* Point back to vfs super block */
161: unsigned long mntflag; /* aggregate attributes */
162: struct inode *ipbmap; /* block map inode */
163: struct inode *ipaimap; /* aggregate inode map inode */
164: struct inode *ipaimap2; /* secondary aimap inode */
165: struct inode *ipimap; /* aggregate inode map inode */
166: struct jfs_log *log; /* log */
167: struct list_head log_list; /* volumes associated with a journal */
168: short bsize; /* logical block size */
169: short l2bsize; /* log2 logical block size */
170: short nbperpage; /* blocks per page */
171: short l2nbperpage; /* log2 blocks per page */
172: short l2niperblk; /* log2 inodes per page */
173: dev_t logdev; /* external log device */
174: uint aggregate; /* volume identifier in log record */
175: pxd_t logpxd; /* pxd describing log */
176: pxd_t fsckpxd; /* pxd describing fsck wkspc */
177: pxd_t ait2; /* pxd describing AIT copy */
178: uuid_t uuid; /* 128-bit uuid for volume */
179: uuid_t loguuid; /* 128-bit uuid for log */
180: /*
181: * commit_state is used for synchronization of the jfs_commit
182: * threads. It is protected by LAZY_LOCK().
183: */
184: int commit_state; /* commit state */
185: /* Formerly in ipimap */
186: uint gengen; /* inode generation generator*/
187: uint inostamp; /* shows inode belongs to fileset*/
188:
189: /* Formerly in ipbmap */
190: struct bmap *bmap; /* incore bmap descriptor */
191: struct nls_table *nls_tab; /* current codepage */
192: struct inode *direct_inode; /* metadata inode */
193: uint state; /* mount/recovery state */
194: unsigned long flag; /* mount time flags */
195: uint p_state; /* state prior to going no integrity */
196: kuid_t uid; /* uid to override on-disk uid */
197: kgid_t gid; /* gid to override on-disk gid */
198: uint umask; /* umask to override on-disk umask */
199: uint minblks_trim; /* minimum blocks, for online trim */
200: };
SourceFile:fs/jfs/jfs_incore.h] |
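The derived geometry fields in `jfs_sb_info` (`bsize`, `l2bsize`, `nbperpage`, `l2nbperpage`) are related by simple shifts. The sketch below models that relationship under the assumption, suggested by the `l2*` naming convention and JFS's fixed 4K block size, that `nbperpage = PAGE_SIZE / bsize`; it is not kernel code.

```c
#include <assert.h>

struct mock_geom { short bsize, l2bsize, nbperpage, l2nbperpage; };

/* Derive the per-page block geometry for a given page shift, with
 * JFS's fixed 4K logical blocks. */
static struct mock_geom make_geom(int page_shift)
{
	struct mock_geom g = { .bsize = 4096, .l2bsize = 12 };

	g.l2nbperpage = page_shift - g.l2bsize;
	g.nbperpage = 1 << g.l2nbperpage;
	return g;
}
```

On a 4K page there is one block per page; on a 64K page there are 16, matching the `MPS_PER_PAGE` discussion earlier in the log.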
| 222/4 | 2026/03/16 16:23 | llm | codeexpert |
0mModel:gemini-3-flash-preview Tokens: input: 9461 output: 21 thoughts: 40 Thoughts: Okay, here's my take on those thoughts, tailored for an expert audience: **Analyzing the `direct_inode` Initialization** Alright, yep, confirmed - `direct_inode` is present in the current codebase. Good. Now, the real question is how this thing gets set up. Where's the initialization code for `direct_inode` likely to reside? Given the context and my experience with these kinds of filesystems, my intuition says it's probably going to be in `fs/jfs/super.c`. That's the typical spot for superblock-related structures, and in-memory inode management, so it's the natural place to start looking. Let's head there and see what we can find. |
| 223/4 | 2026/03/16 16:23 | tool | codesearch-find-references |
0mError:requested entity does not exist Args: map[Name:direct_inode] Results: map[References:<nil> TruncatedOutput:false] |
| 224/4 | 2026/03/16 16:23 | llm | codeexpert |
0mModel:gemini-3-flash-preview Tokens: input: 9698 output: 28 thoughts: 32 Thoughts: Okay, here's my interpretation of those thoughts, presented as a first-person expert summary: **Debugging a Failed Code Search** Alright, that's odd. `codesearch-find-references` just choked. I was hoping to quickly pinpoint where `JFS_SBI(sb)->direct_inode` is used, but no luck. This tells me I need to go back a step. I need to figure out *why* that search failed. It could be anything from a malformed query to an indexing issue on the tool's end. This means my immediate line of attack for finding this variable is not working. Time to consider the next option. Instead of figuring out why, I will just continue, look up the field by its fully qualified name, and see what happens. |
| 225/4 | 2026/03/16 16:23 | tool | codesearch-find-references |
0mArgs:map[Name:jfs_sb_info::direct_inode] Results: map[References:[map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__get_metapage SourceFile:fs/jfs/jfs_metapage.c SourceLine:709] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__invalidate_metapages SourceFile:fs/jfs/jfs_metapage.c SourceLine:905] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_extendfs SourceFile:fs/jfs/resize.c SourceLine:201] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:jfs_fill_super SourceFile:fs/jfs/super.c SourceLine:521] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_fill_super SourceFile:fs/jfs/super.c SourceLine:571] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_fill_super SourceFile:fs/jfs/super.c SourceLine:572] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_fill_super SourceFile:fs/jfs/super.c SourceLine:573] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_fill_super SourceFile:fs/jfs/super.c SourceLine:574] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:jfs_fill_super SourceFile:fs/jfs/super.c SourceLine:575] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_put_super SourceFile:fs/jfs/super.c SourceLine:200] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_put_super SourceFile:fs/jfs/super.c SourceLine:201] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_reconfigure SourceFile:fs/jfs/super.c SourceLine:406] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_umount SourceFile:fs/jfs/jfs_umount.c SourceLine:96] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_umount_rw SourceFile:fs/jfs/jfs_umount.c SourceLine:148] map[ReferenceKind:reads 
ReferencingEntityKind:function ReferencingEntityName:write_special_inodes SourceFile:fs/jfs/jfs_logmgr.c SourceLine:209]] TruncatedOutput:false] |
| 226/4 | 2026/03/16 16:23 | llm | codeexpert |
0mModel:gemini-3-flash-preview Tokens: input: 10735 output: 43 thoughts: 24 Thoughts: Here's a summary of my thought process: **Diving into the JFS Superblock Initialization** Okay, time to dissect the `jfs_fill_super` function. I'm heading straight to `fs/jfs/super.c` to get a handle on the JFS superblock initialization process. This is the bedrock of understanding how JFS mounts and sets itself up. I'm expecting to find the key steps for reading the superblock from disk, validating its contents, and setting up the in-memory data structures that represent the filesystem. I want to see how it handles things like journal setup, extent allocation, and any superblock-specific magic numbers or feature flags. I'll be looking specifically at how it deals with potential inconsistencies or errors during the mounting process. My aim is to understand the robustness of JFS's initialization and identify any points that might be performance bottlenecks or potential failure points. This knowledge will be invaluable for anything related to debugging, performance tuning, or even potential filesystem recovery efforts down the line. |
| 227/4 | 2026/03/16 16:23 | tool | codesearch-definition-source |
0mArgs:map[ContextFile:fs/jfs/super.c IncludeLines:true Name:jfs_fill_super] Results: map[SourceCode: 441: static int jfs_fill_super(struct super_block *sb, struct fs_context *fc)
442: {
443: struct jfs_context *ctx = fc->fs_private;
444: int silent = fc->sb_flags & SB_SILENT;
445: struct jfs_sb_info *sbi;
446: struct inode *inode;
447: int rc;
448: int ret = -EINVAL;
449:
450: jfs_info("In jfs_read_super: s_flags=0x%lx", sb->s_flags);
451:
452: sbi = kzalloc_obj(struct jfs_sb_info);
453: if (!sbi)
454: return -ENOMEM;
455:
456: sb->s_fs_info = sbi;
457: sb->s_max_links = JFS_LINK_MAX;
458: sb->s_time_min = 0;
459: sb->s_time_max = U32_MAX;
460: sbi->sb = sb;
461:
462: /* Transfer results of parsing to the sbi */
463: sbi->flag = ctx->flag;
464: sbi->uid = ctx->uid;
465: sbi->gid = ctx->gid;
466: sbi->umask = ctx->umask;
467: if (ctx->nls_map != (void *) -1) {
468: unload_nls(sbi->nls_tab);
469: sbi->nls_tab = ctx->nls_map;
470: }
471: ctx->nls_map = NULL;
472:
473: if (sbi->flag & JFS_DISCARD) {
474: if (!bdev_max_discard_sectors(sb->s_bdev)) {
475: pr_err("JFS: discard option not supported on device\n");
476: sbi->flag &= ~JFS_DISCARD;
477: } else {
478: sbi->minblks_trim = ctx->minblks_trim;
479: }
480: }
481:
482: #ifdef CONFIG_JFS_POSIX_ACL
483: sb->s_flags |= SB_POSIXACL;
484: #endif
485:
486: if (ctx->resize) {
487: pr_err("resize option for remount only\n");
488: goto out_unload;
489: }
490:
491: /*
492: * Initialize blocksize to 4K.
493: */
494: sb_set_blocksize(sb, PSIZE);
495:
496: /*
497: * Set method vectors.
498: */
499: sb->s_op = &jfs_super_operations;
500: sb->s_export_op = &jfs_export_operations;
501: sb->s_xattr = jfs_xattr_handlers;
502: #ifdef CONFIG_QUOTA
503: sb->dq_op = &dquot_operations;
504: sb->s_qcop = &jfs_quotactl_ops;
505: sb->s_quota_types = QTYPE_MASK_USR | QTYPE_MASK_GRP;
506: #endif
507:
508: /*
509: * Initialize direct-mapping inode/address-space
510: */
511: inode = new_inode(sb);
512: if (inode == NULL) {
513: ret = -ENOMEM;
514: goto out_unload;
515: }
516: inode->i_size = bdev_nr_bytes(sb->s_bdev);
517: inode->i_mapping->a_ops = &jfs_metapage_aops;
518: inode_fake_hash(inode);
519: mapping_set_gfp_mask(inode->i_mapping, GFP_NOFS);
520:
521: sbi->direct_inode = inode;
522:
523: rc = jfs_mount(sb);
524: if (rc) {
525: if (!silent)
526: jfs_err("jfs_mount failed w/return code = %d", rc);
527: goto out_mount_failed;
528: }
529: if (sb_rdonly(sb))
530: sbi->log = NULL;
531: else {
532: rc = jfs_mount_rw(sb, 0);
533: if (rc) {
534: if (!silent) {
535: jfs_err("jfs_mount_rw failed, return code = %d",
536: rc);
537: }
538: goto out_no_rw;
539: }
540: }
541:
542: sb->s_magic = JFS_SUPER_MAGIC;
543:
544: if (sbi->mntflag & JFS_OS2)
545: set_default_d_op(sb, &jfs_ci_dentry_operations);
546:
547: inode = jfs_iget(sb, ROOT_I);
548: if (IS_ERR(inode)) {
549: ret = PTR_ERR(inode);
550: goto out_no_rw;
551: }
552: sb->s_root = d_make_root(inode);
553: if (!sb->s_root)
554: goto out_no_root;
555:
556: /* logical blocks are represented by 40 bits in pxd_t, etc.
557: * and page cache is indexed by long
558: */
559: sb->s_maxbytes = min(((loff_t)sb->s_blocksize) << 40, MAX_LFS_FILESIZE);
560: sb->s_time_gran = 1;
561: return 0;
562:
563: out_no_root:
564: jfs_err("jfs_read_super: get root dentry failed");
565:
566: out_no_rw:
567: rc = jfs_umount(sb);
568: if (rc)
569: jfs_err("jfs_umount failed with return code %d", rc);
570: out_mount_failed:
571: filemap_write_and_wait(sbi->direct_inode->i_mapping);
572: truncate_inode_pages(sbi->direct_inode->i_mapping, 0);
573: make_bad_inode(sbi->direct_inode);
574: iput(sbi->direct_inode);
575: sbi->direct_inode = NULL;
576: out_unload:
577: unload_nls(sbi->nls_tab);
578: kfree(sbi);
579: return ret;
580: }
SourceFile:fs/jfs/super.c] |
| 228/4 | 2026/03/16 16:23 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 13181 output: 44 thoughts: 220 Thoughts: **Analyzing Block Size Initialization in JFS** Okay, so I'm looking at `jfs_fill_super` and I see this sequence: `sb_set_blocksize(sb, PSIZE)`, then `new_inode(sb)`, and then `sbi->direct_inode = inode`. `PSIZE` is obviously a key piece here, and I already know where it's defined, so that's where I need to look. Right, `sb_set_blocksize` sets both `sb->s_blocksize` and its corresponding bits. And the `new_inode` function uses `sb->s_blocksize_bits` to initialize the inode's block size, namely `inode->i_blkbits`. Seems straightforward enough. But wait, hold on a second. This `jfs_mount` function... it gets called *after* that `direct_inode` is created, doesn't it? And `jfs_mount`, depending on how it's written, might have the ability to modify the superblock. Meaning, potentially, that the block size could be changed *after* the inode gets initialized. That's a potential area for a bug, so I need to check how `jfs_mount` behaves. I'll need to go look at `fs/jfs/jfs_mount.c` to see if it modifies those values. Let's see... |
| 229/4 | 2026/03/16 16:23 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_mount.c IncludeLines:true Name:jfs_mount] Results: map[SourceCode: 57: /*
58: * NAME: jfs_mount(sb)
59: *
60: * FUNCTION: vfs_mount()
61: *
62: * PARAMETER: sb - super block
63: *
64: * RETURN: -EBUSY - device already mounted or open for write
65: * -EBUSY - cvrdvp already mounted;
66: * -EBUSY - mount table full
67: * -ENOTDIR- cvrdvp not directory on a device mount
68: * -ENXIO - device open failure
69: */
70: int jfs_mount(struct super_block *sb)
71: {
72: int rc = 0; /* Return code */
73: struct jfs_sb_info *sbi = JFS_SBI(sb);
74: struct inode *ipaimap = NULL;
75: struct inode *ipaimap2 = NULL;
76: struct inode *ipimap = NULL;
77: struct inode *ipbmap = NULL;
78:
79: /*
80: * read/validate superblock
81: * (initialize mount inode from the superblock)
82: */
83: if ((rc = chkSuper(sb))) {
84: goto out;
85: }
86:
87: ipaimap = diReadSpecial(sb, AGGREGATE_I, 0);
88: if (ipaimap == NULL) {
89: jfs_err("jfs_mount: Failed to read AGGREGATE_I");
90: rc = -EIO;
91: goto out;
92: }
93: sbi->ipaimap = ipaimap;
94:
95: jfs_info("jfs_mount: ipaimap:0x%p", ipaimap);
96:
97: /*
98: * initialize aggregate inode allocation map
99: */
100: if ((rc = diMount(ipaimap))) {
101: jfs_err("jfs_mount: diMount(ipaimap) failed w/rc = %d", rc);
102: goto err_ipaimap;
103: }
104:
105: /*
106: * open aggregate block allocation map
107: */
108: ipbmap = diReadSpecial(sb, BMAP_I, 0);
109: if (ipbmap == NULL) {
110: rc = -EIO;
111: goto err_umount_ipaimap;
112: }
113:
114: jfs_info("jfs_mount: ipbmap:0x%p", ipbmap);
115:
116: sbi->ipbmap = ipbmap;
117:
118: /*
119: * initialize aggregate block allocation map
120: */
121: if ((rc = dbMount(ipbmap))) {
122: jfs_err("jfs_mount: dbMount failed w/rc = %d", rc);
123: goto err_ipbmap;
124: }
125:
126: /*
127: * open the secondary aggregate inode allocation map
128: *
129: * This is a duplicate of the aggregate inode allocation map.
130: *
131: * hand craft a vfs in the same fashion as we did to read ipaimap.
132: * By adding INOSPEREXT (32) to the inode number, we are telling
133: * diReadSpecial that we are reading from the secondary aggregate
134: * inode table. This also creates a unique entry in the inode hash
135: * table.
136: */
137: if ((sbi->mntflag & JFS_BAD_SAIT) == 0) {
138: ipaimap2 = diReadSpecial(sb, AGGREGATE_I, 1);
139: if (!ipaimap2) {
140: jfs_err("jfs_mount: Failed to read AGGREGATE_I");
141: rc = -EIO;
142: goto err_umount_ipbmap;
143: }
144: sbi->ipaimap2 = ipaimap2;
145:
146: jfs_info("jfs_mount: ipaimap2:0x%p", ipaimap2);
147:
148: /*
149: * initialize secondary aggregate inode allocation map
150: */
151: if ((rc = diMount(ipaimap2))) {
152: jfs_err("jfs_mount: diMount(ipaimap2) failed, rc = %d",
153: rc);
154: goto err_ipaimap2;
155: }
156: } else
157: /* Secondary aggregate inode table is not valid */
158: sbi->ipaimap2 = NULL;
159:
160: /*
161: * mount (the only/single) fileset
162: */
163: /*
164: * open fileset inode allocation map (aka fileset inode)
165: */
166: ipimap = diReadSpecial(sb, FILESYSTEM_I, 0);
167: if (ipimap == NULL) {
168: jfs_err("jfs_mount: Failed to read FILESYSTEM_I");
169: /* open fileset secondary inode allocation map */
170: rc = -EIO;
171: goto err_umount_ipaimap2;
172: }
173: jfs_info("jfs_mount: ipimap:0x%p", ipimap);
174:
175: /* initialize fileset inode allocation map */
176: if ((rc = diMount(ipimap))) {
177: jfs_err("jfs_mount: diMount failed w/rc = %d", rc);
178: goto err_ipimap;
179: }
180:
181: /* map further access of per fileset inodes by the fileset inode */
182: sbi->ipimap = ipimap;
183:
184: return rc;
185:
186: /*
187: * unwind on error
188: */
189: err_ipimap:
190: /* close fileset inode allocation map inode */
191: diFreeSpecial(ipimap);
192: err_umount_ipaimap2:
193: /* close secondary aggregate inode allocation map */
194: if (ipaimap2)
195: diUnmount(ipaimap2, 1);
196: err_ipaimap2:
197: /* close aggregate inodes */
198: if (ipaimap2)
199: diFreeSpecial(ipaimap2);
200: err_umount_ipbmap: /* close aggregate block allocation map */
201: dbUnmount(ipbmap, 1);
202: err_ipbmap: /* close aggregate inodes */
203: diFreeSpecial(ipbmap);
204: err_umount_ipaimap: /* close aggregate inode allocation map */
205: diUnmount(ipaimap, 1);
206: err_ipaimap: /* close aggregate inodes */
207: diFreeSpecial(ipaimap);
208: out:
209: if (rc)
210: jfs_err("Mount JFS Failure: %d", rc);
211:
212: return rc;
213: }
SourceFile:fs/jfs/jfs_mount.c] |
| 230/4 | 2026/03/16 16:23 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 16042 output: 42 thoughts: 22 Thoughts: Here's a summary of my thought process: **Diving into the JFS Filesystem Structure** Okay, so I'm looking at `jfs_mount`, and it's calling `chkSuper(sb)`. Right, that's the obvious next step. We're mounting a JFS filesystem, so we need to validate the superblock. This means I need to dig into what `chkSuper` actually *does*. I assume it's going to check the basics: magic numbers, filesystem type, maybe some crucial fields related to inode and block size and the size of the filesystem. It's the gatekeeper, making sure this is a JFS filesystem we're even dealing with, and that it looks like a valid one. I'm already anticipating what to look for and where to begin digging. The super block validation is absolutely critical for the reliability of the system, so there's no way it's skipping any checks. Time to see what's actually under the hood. |
| 231/4 | 2026/03/16 16:23 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_mount.c IncludeLines:true Name:chkSuper] Results: map[SourceCode: 277: /*
278: * chkSuper()
279: *
280: * validate the superblock of the file system to be mounted and
281: * get the file system parameters.
282: *
283: * returns
284: * 0 with fragsize set if check successful
285: * error code if not successful
286: */
287: static int chkSuper(struct super_block *sb)
288: {
289: int rc = 0;
290: struct jfs_sb_info *sbi = JFS_SBI(sb);
291: struct jfs_superblock *j_sb;
292: struct buffer_head *bh;
293: int AIM_bytesize, AIT_bytesize;
294: int expected_AIM_bytesize, expected_AIT_bytesize;
295: s64 AIM_byte_addr, AIT_byte_addr, fsckwsp_addr;
296: s64 byte_addr_diff0, byte_addr_diff1;
297: s32 bsize;
298:
299: if ((rc = readSuper(sb, &bh)))
300: return rc;
301: j_sb = (struct jfs_superblock *)bh->b_data;
302:
303: /*
304: * validate superblock
305: */
306: /* validate fs signature */
307: if (strncmp(j_sb->s_magic, JFS_MAGIC, 4) ||
308: le32_to_cpu(j_sb->s_version) > JFS_VERSION) {
309: rc = -EINVAL;
310: goto out;
311: }
312:
313: bsize = le32_to_cpu(j_sb->s_bsize);
314: if (bsize != PSIZE) {
315: jfs_err("Only 4K block size supported!");
316: rc = -EINVAL;
317: goto out;
318: }
319:
320: jfs_info("superblock: flag:0x%08x state:0x%08x size:0x%Lx",
321: le32_to_cpu(j_sb->s_flag), le32_to_cpu(j_sb->s_state),
322: (unsigned long long) le64_to_cpu(j_sb->s_size));
323:
324: /* validate the descriptors for Secondary AIM and AIT */
325: if ((j_sb->s_flag & cpu_to_le32(JFS_BAD_SAIT)) !=
326: cpu_to_le32(JFS_BAD_SAIT)) {
327: expected_AIM_bytesize = 2 * PSIZE;
328: AIM_bytesize = lengthPXD(&j_sb->s_aim2) * bsize;
329: expected_AIT_bytesize = 4 * PSIZE;
330: AIT_bytesize = lengthPXD(&j_sb->s_ait2) * bsize;
331: AIM_byte_addr = addressPXD(&j_sb->s_aim2) * bsize;
332: AIT_byte_addr = addressPXD(&j_sb->s_ait2) * bsize;
333: byte_addr_diff0 = AIT_byte_addr - AIM_byte_addr;
334: fsckwsp_addr = addressPXD(&j_sb->s_fsckpxd) * bsize;
335: byte_addr_diff1 = fsckwsp_addr - AIT_byte_addr;
336: if ((AIM_bytesize != expected_AIM_bytesize) ||
337: (AIT_bytesize != expected_AIT_bytesize) ||
338: (byte_addr_diff0 != AIM_bytesize) ||
339: (byte_addr_diff1 <= AIT_bytesize))
340: j_sb->s_flag |= cpu_to_le32(JFS_BAD_SAIT);
341: }
342:
343: if ((j_sb->s_flag & cpu_to_le32(JFS_GROUPCOMMIT)) !=
344: cpu_to_le32(JFS_GROUPCOMMIT))
345: j_sb->s_flag |= cpu_to_le32(JFS_GROUPCOMMIT);
346:
347: /* validate fs state */
348: if (j_sb->s_state != cpu_to_le32(FM_CLEAN) &&
349: !sb_rdonly(sb)) {
350: jfs_err("jfs_mount: Mount Failure: File System Dirty.");
351: rc = -EINVAL;
352: goto out;
353: }
354:
355: sbi->state = le32_to_cpu(j_sb->s_state);
356: sbi->mntflag = le32_to_cpu(j_sb->s_flag);
357:
358: /*
359: * JFS always does I/O by 4K pages. Don't tell the buffer cache
360: * that we use anything else (leave s_blocksize alone).
361: */
362: sbi->bsize = bsize;
363: sbi->l2bsize = le16_to_cpu(j_sb->s_l2bsize);
364:
365: /* check some fields for possible corruption */
366: if (sbi->l2bsize != ilog2((u32)bsize) ||
367: j_sb->pad != 0 ||
368: le32_to_cpu(j_sb->s_state) > FM_STATE_MAX) {
369: rc = -EINVAL;
370: jfs_err("jfs_mount: Mount Failure: superblock is corrupt!");
371: goto out;
372: }
373:
374: /*
375: * For now, ignore s_pbsize, l2bfactor. All I/O going through buffer
376: * cache.
377: */
378: sbi->nbperpage = PSIZE >> sbi->l2bsize;
379: sbi->l2nbperpage = L2PSIZE - sbi->l2bsize;
380: sbi->l2niperblk = sbi->l2bsize - L2DISIZE;
381: if (sbi->mntflag & JFS_INLINELOG)
382: sbi->logpxd = j_sb->s_logpxd;
383: else {
384: sbi->logdev = new_decode_dev(le32_to_cpu(j_sb->s_logdev));
385: uuid_copy(&sbi->uuid, &j_sb->s_uuid);
386: uuid_copy(&sbi->loguuid, &j_sb->s_loguuid);
387: }
388: sbi->fsckpxd = j_sb->s_fsckpxd;
389: sbi->ait2 = j_sb->s_ait2;
390:
391: out:
392: brelse(bh);
393: return rc;
394: }
SourceFile:fs/jfs/jfs_mount.c] |
| 232/4 | 2026/03/16 16:23 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 18492 output: 41 thoughts: 480 Thoughts: **Analyzing JFS Block Size and Inode Initialization** Okay, let's break this down. In the `chkSuper` function, the code explicitly enforces a 4KB block size for the JFS filesystem. Line 313 reads the block size, and then lines 314-317 confirm it matches `PSIZE` which, as we know, is defined as 4096. Any other block size is considered an error, returning `-EINVAL`. This is expected. After that (line 362), `sbi->bsize` gets set to `bsize` (4096), and `sbi->l2bsize` is populated from `j_sb->s_l2bsize`. Further down, on line 366, there is a check ensuring `sbi->l2bsize` is consistent with `ilog2(bsize)`. Given that `bsize` is forced to be 4096, `sbi->l2bsize` *must* be 12 because `ilog2(4096)` is, naturally, 12. Seems sound. Moving to `jfs_fill_super`, line 494 calls `sb_set_blocksize(sb, PSIZE)`. This explicitly sets the superblock's `sb->s_blocksize_bits` to 12. So, `sb->s_blocksize_bits` is definitely 12, and `sbi->l2bsize`, as we previously ascertained, is also 12. So far, so good; consistency is king. Now, the question becomes: how is the block size propagated to individual inodes? When we create a new inode, `new_inode(sb)` is called. Inside that, `inode->i_blkbits` is initialized from `sb->s_blocksize_bits`. So, the block size information is carried by the super block, as we would expect. This directly implies that `inode->i_blkbits` gets set to 12. But what about existing inodes fetched through other means? We need to verify that all of the inodes within the JFS filesystem have the same block size as we have discovered. `jfs_iget` is the key function used for retrieving existing inodes. Therefore, the next logical step is to examine `jfs_iget` in `fs/jfs/inode.c` to see how the block size is handled there to ensure complete consistency. |
| 233/4 | 2026/03/16 16:24 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/inode.c IncludeLines:true Name:jfs_iget] Results: map[SourceCode: 24: struct inode *jfs_iget(struct super_block *sb, unsigned long ino)
25: {
26: struct inode *inode;
27: int ret;
28:
29: inode = iget_locked(sb, ino);
30: if (!inode)
31: return ERR_PTR(-ENOMEM);
32: if (!(inode_state_read_once(inode) & I_NEW))
33: return inode;
34:
35: ret = diRead(inode);
36: if (ret < 0) {
37: iget_failed(inode);
38: return ERR_PTR(ret);
39: }
40:
41: if (S_ISREG(inode->i_mode)) {
42: inode->i_op = &jfs_file_inode_operations;
43: inode->i_fop = &jfs_file_operations;
44: inode->i_mapping->a_ops = &jfs_aops;
45: } else if (S_ISDIR(inode->i_mode)) {
46: inode->i_op = &jfs_dir_inode_operations;
47: inode->i_fop = &jfs_dir_operations;
48: } else if (S_ISLNK(inode->i_mode)) {
49: if (inode->i_size >= IDATASIZE) {
50: inode->i_op = &page_symlink_inode_operations;
51: inode_nohighmem(inode);
52: inode->i_mapping->a_ops = &jfs_aops;
53: } else {
54: inode->i_op = &jfs_fast_symlink_inode_operations;
55: inode->i_link = JFS_IP(inode)->i_inline;
56: /*
57: * The inline data should be null-terminated, but
58: * don't let on-disk corruption crash the kernel
59: */
60: inode->i_link[inode->i_size] = '\0';
61: }
62: } else if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode) ||
63: S_ISFIFO(inode->i_mode) || S_ISSOCK(inode->i_mode)) {
64: inode->i_op = &jfs_file_inode_operations;
65: init_special_inode(inode, inode->i_mode, inode->i_rdev);
66: } else {
67: printk(KERN_DEBUG "JFS: Invalid file type 0%04o for inode %lu.\n",
68: inode->i_mode, inode->i_ino);
69: iget_failed(inode);
70: return ERR_PTR(-EIO);
71: }
72: unlock_new_inode(inode);
73: return inode;
74: }
SourceFile:fs/jfs/inode.c] |
| 234/4 | 2026/03/16 16:24 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 20409 output: 42 thoughts: 69 Thoughts: Here's my summary, as though I'm thinking through this process: **Diving into Inode Initialization** Okay, so `iget_locked` is the entry point, and that leads us to `alloc_inode`. Right, `alloc_inode` then calls `inode_init_always`. Got it. Now, *that's* where we're seeing the core inode initialization happening. Specifically, `inode_init_always` is responsible for setting `inode->i_blkbits` to `sb->s_blocksize_bits`. So, the block size bits are being pulled directly from the superblock and assigned to the inode. That makes perfect sense for ensuring proper block addressing within the file system. Let's take a look at how that is handled in the JFS filesystem, in particular the `diRead` function in `fs/jfs/jfs_imap.c`. |
| 235/4 | 2026/03/16 16:24 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_imap.c IncludeLines:true Name:diRead] Results: map[SourceCode: 257: /*
258: * NAME: diRead()
259: *
260: * FUNCTION: initialize an incore inode from disk.
261: *
262: * on entry, the specifed incore inode should itself
263: * specify the disk inode number corresponding to the
264: * incore inode (i.e. i_number should be initialized).
265: *
266: * this routine handles incore inode initialization for
267: * both "special" and "regular" inodes. special inodes
268: * are those required early in the mount process and
269: * require special handling since much of the file system
270: * is not yet initialized. these "special" inodes are
271: * identified by a NULL inode map inode pointer and are
272: * actually initialized by a call to diReadSpecial().
273: *
274: * for regular inodes, the iag describing the disk inode
275: * is read from disk to determine the inode extent address
276: * for the disk inode. with the inode extent address in
277: * hand, the page of the extent that contains the disk
278: * inode is read and the disk inode is copied to the
279: * incore inode.
280: *
281: * PARAMETERS:
282: * ip - pointer to incore inode to be initialized from disk.
283: *
284: * RETURN VALUES:
285: * 0 - success
286: * -EIO - i/o error.
287: * -ENOMEM - insufficient memory
288: *
289: */
290: int diRead(struct inode *ip)
291: {
292: struct jfs_sb_info *sbi = JFS_SBI(ip->i_sb);
293: int iagno, ino, extno, rc, agno;
294: struct inode *ipimap;
295: struct dinode *dp;
296: struct iag *iagp;
297: struct metapage *mp;
298: s64 blkno, agstart;
299: struct inomap *imap;
300: int block_offset;
301: int inodes_left;
302: unsigned long pageno;
303: int rel_inode;
304:
305: jfs_info("diRead: ino = %ld", ip->i_ino);
306:
307: ipimap = sbi->ipimap;
308: JFS_IP(ip)->ipimap = ipimap;
309:
310: /* determine the iag number for this inode (number) */
311: iagno = INOTOIAG(ip->i_ino);
312:
313: /* read the iag */
314: IREAD_LOCK(ipimap, RDWRLOCK_IMAP);
315: imap = JFS_IP(ipimap)->i_imap;
316: rc = diIAGRead(imap, iagno, &mp);
317: IREAD_UNLOCK(ipimap);
318: if (rc) {
319: jfs_err("diRead: diIAGRead returned %d", rc);
320: return (rc);
321: }
322:
323: iagp = (struct iag *) mp->data;
324:
325: /* determine inode extent that holds the disk inode */
326: ino = ip->i_ino & (INOSPERIAG - 1);
327: extno = ino >> L2INOSPEREXT;
328:
329: if ((lengthPXD(&iagp->inoext[extno]) != imap->im_nbperiext) ||
330: (addressPXD(&iagp->inoext[extno]) == 0)) {
331: release_metapage(mp);
332: return -ESTALE;
333: }
334:
335: /* get disk block number of the page within the inode extent
336: * that holds the disk inode.
337: */
338: blkno = INOPBLK(&iagp->inoext[extno], ino, sbi->l2nbperpage);
339:
340: /* get the ag for the iag */
341: agstart = le64_to_cpu(iagp->agstart);
342: agno = BLKTOAG(agstart, JFS_SBI(ip->i_sb));
343:
344: release_metapage(mp);
345: if (agno >= MAXAG || agno < 0)
346: return -EIO;
347:
348: rel_inode = (ino & (INOSPERPAGE - 1));
349: pageno = blkno >> sbi->l2nbperpage;
350:
351: if ((block_offset = ((u32) blkno & (sbi->nbperpage - 1)))) {
352: /*
353: * OS/2 didn't always align inode extents on page boundaries
354: */
355: inodes_left =
356: (sbi->nbperpage - block_offset) << sbi->l2niperblk;
357:
358: if (rel_inode < inodes_left)
359: rel_inode += block_offset << sbi->l2niperblk;
360: else {
361: pageno += 1;
362: rel_inode -= inodes_left;
363: }
364: }
365:
366: /* read the page of disk inode */
367: mp = read_metapage(ipimap, pageno << sbi->l2nbperpage, PSIZE, 1);
368: if (!mp) {
369: jfs_err("diRead: read_metapage failed");
370: return -EIO;
371: }
372:
373: /* locate the disk inode requested */
374: dp = (struct dinode *) mp->data;
375: dp += rel_inode;
376:
377: if (ip->i_ino != le32_to_cpu(dp->di_number)) {
378: jfs_error(ip->i_sb, "i_ino != di_number\n");
379: rc = -EIO;
380: } else if (le32_to_cpu(dp->di_nlink) == 0)
381: rc = -ESTALE;
382: else
383: /* copy the disk inode to the in-memory inode */
384: rc = copy_from_dinode(dp, ip);
385:
386: release_metapage(mp);
387:
388: /* set the ag for the inode */
389: JFS_IP(ip)->agstart = agstart;
390: JFS_IP(ip)->active_ag = -1;
391:
392: return (rc);
393: }
SourceFile:fs/jfs/jfs_imap.c] |
| 236/4 | 2026/03/16 16:24 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 22871 output: 46 thoughts: 19 Thoughts: Okay, let's break this down. **Diving into the `diRead` Function** Right, so `diRead` is the function we're focusing on. The text indicates that `diRead` calls `copy_from_dinode`. That's the key relationship we need to understand. I'm going to start by confirming this interaction. The flow between these two functions is crucial for the overall data access process, so I need to analyze how data is actually moved. I'll need to figure out *why* this call is happening here. Are there alternatives? Are there other ways to potentially optimize the data copy? I need to understand the implications of this particular function call within the context of the larger I/O system. |
| 237/4 | 2026/03/16 16:24 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_imap.c IncludeLines:true Name:copy_from_dinode] Results: map[SourceCode:3025: /*
3026: * NAME: copy_from_dinode()
3027: *
3028: * FUNCTION: Copies inode info from disk inode to in-memory inode
3029: *
3030: * RETURN VALUES:
3031: * 0 - success
3032: * -EINVAL - unexpected inode type
3033: */
3034: static int copy_from_dinode(struct dinode * dip, struct inode *ip)
3035: {
3036: struct jfs_inode_info *jfs_ip = JFS_IP(ip);
3037: struct jfs_sb_info *sbi = JFS_SBI(ip->i_sb);
3038: int fileset = le32_to_cpu(dip->di_fileset);
3039:
3040: switch (fileset) {
3041: case AGGR_RESERVED_I: case AGGREGATE_I: case BMAP_I:
3042: case LOG_I: case BADBLOCK_I: case FILESYSTEM_I:
3043: break;
3044: default:
3045: return -EINVAL;
3046: }
3047:
3048: jfs_ip->fileset = fileset;
3049: jfs_ip->mode2 = le32_to_cpu(dip->di_mode);
3050: jfs_set_inode_flags(ip);
3051:
3052: ip->i_mode = le32_to_cpu(dip->di_mode) & 0xffff;
3053: if (sbi->umask != -1) {
3054: ip->i_mode = (ip->i_mode & ~0777) | (0777 & ~sbi->umask);
3055: /* For directories, add x permission if r is allowed by umask */
3056: if (S_ISDIR(ip->i_mode)) {
3057: if (ip->i_mode & 0400)
3058: ip->i_mode |= 0100;
3059: if (ip->i_mode & 0040)
3060: ip->i_mode |= 0010;
3061: if (ip->i_mode & 0004)
3062: ip->i_mode |= 0001;
3063: }
3064: }
3065: set_nlink(ip, le32_to_cpu(dip->di_nlink));
3066:
3067: jfs_ip->saved_uid = make_kuid(&init_user_ns, le32_to_cpu(dip->di_uid));
3068: if (!uid_valid(sbi->uid))
3069: ip->i_uid = jfs_ip->saved_uid;
3070: else {
3071: ip->i_uid = sbi->uid;
3072: }
3073:
3074: jfs_ip->saved_gid = make_kgid(&init_user_ns, le32_to_cpu(dip->di_gid));
3075: if (!gid_valid(sbi->gid))
3076: ip->i_gid = jfs_ip->saved_gid;
3077: else {
3078: ip->i_gid = sbi->gid;
3079: }
3080:
3081: ip->i_size = le64_to_cpu(dip->di_size);
3082: inode_set_atime(ip, le32_to_cpu(dip->di_atime.tv_sec),
3083: le32_to_cpu(dip->di_atime.tv_nsec));
3084: inode_set_mtime(ip, le32_to_cpu(dip->di_mtime.tv_sec),
3085: le32_to_cpu(dip->di_mtime.tv_nsec));
3086: inode_set_ctime(ip, le32_to_cpu(dip->di_ctime.tv_sec),
3087: le32_to_cpu(dip->di_ctime.tv_nsec));
3088: ip->i_blocks = LBLK2PBLK(ip->i_sb, le64_to_cpu(dip->di_nblocks));
3089: ip->i_generation = le32_to_cpu(dip->di_gen);
3090:
3091: jfs_ip->ixpxd = dip->di_ixpxd; /* in-memory pxd's are little-endian */
3092: jfs_ip->acl = dip->di_acl; /* as are dxd's */
3093: jfs_ip->ea = dip->di_ea;
3094: jfs_ip->next_index = le32_to_cpu(dip->di_next_index);
3095: jfs_ip->otime = le32_to_cpu(dip->di_otime.tv_sec);
3096: jfs_ip->acltype = le32_to_cpu(dip->di_acltype);
3097:
3098: if (S_ISCHR(ip->i_mode) || S_ISBLK(ip->i_mode)) {
3099: jfs_ip->dev = le32_to_cpu(dip->di_rdev);
3100: ip->i_rdev = new_decode_dev(jfs_ip->dev);
3101: }
3102:
3103: if (S_ISDIR(ip->i_mode)) {
3104: memcpy(&jfs_ip->u.dir, &dip->u._dir, 384);
3105: } else if (S_ISREG(ip->i_mode) || S_ISLNK(ip->i_mode)) {
3106: memcpy(&jfs_ip->i_xtroot, &dip->di_xtroot, 288);
3107: } else
3108: memcpy(&jfs_ip->i_inline_ea, &dip->di_inlineea, 128);
3109:
3110: /* Zero the in-memory-only stuff */
3111: jfs_ip->cflag = 0;
3112: jfs_ip->btindex = 0;
3113: jfs_ip->btorder = 0;
3114: jfs_ip->bxflag = 0;
3115: jfs_ip->blid = 0;
3116: jfs_ip->atlhead = 0;
3117: jfs_ip->atltail = 0;
3118: jfs_ip->xtlid = 0;
3119: return (0);
3120: }
SourceFile:fs/jfs/jfs_imap.c] |
| 238/4 | 2026/03/16 16:24 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 25026 output: 24 thoughts: 238 Thoughts: **Analyzing JFS Block Size and Inode Initialization**
Okay, let me think this through. It looks like the `copy_from_dinode` function in JFS doesn't seem to touch the `i_blkbits` field within an inode. If that's the case, then `i_blkbits` would retain the value it was initialized to, which, as I recall from `inode_init_always`, is set to `sb->s_blocksize_bits`.
Now, a crucial question arises: does JFS ever call `sb_set_blocksize` with anything other than the standard `PSIZE` value, which is 4096 bytes? I seem to recall some checks. Ah yes, in the `chkSuper` function. It explicitly checks for this condition: the blocksize must be `PSIZE`. The code actually errors out if it finds something other than that:
```c
314: if (bsize != PSIZE) {
315: jfs_err("Only 4K block size supported!");
316: rc = -EINVAL;
317: goto out;
318: }
```
Right. So that means that `sb->s_blocksize` is consistently set to 4096 bytes, and consequently, `sb->s_blocksize_bits` is always 12 (since 2^12 = 4096).
Alright, let's step back a bit and make sure. Let me see where else in the JFS code the `i_blkbits` field is even touched. I'll need to do a code search for all assignments to `i_blkbits` within the `fs/jfs/` directory. That'll confirm if there's any other place where the block size might be redefined.
|
| 239/4 | 2026/03/16 16:24 | tool | codesearch-find-references |
Args:map[Name:inode::i_blkbits] Results: map[References:[map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__blockdev_direct_IO SourceFile:fs/direct-io.c SourceLine:1110] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__blockdev_direct_IO SourceFile:fs/direct-io.c SourceLine:1110] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__blockdev_direct_IO SourceFile:fs/direct-io.c SourceLine:1110] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__blockdev_direct_IO SourceFile:fs/direct-io.c SourceLine:1110] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__blockdev_direct_IO SourceFile:fs/direct-io.c SourceLine:1110] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__blockdev_direct_IO SourceFile:fs/direct-io.c SourceLine:1110] map[ReferenceKind:takes-address-of ReferencingEntityKind:function ReferencingEntityName:__blockdev_direct_IO SourceFile:fs/direct-io.c SourceLine:1110] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__ext4_block_zero_page_range SourceFile:fs/ext4/inode.c SourceLine:4023] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__ext4_find_entry SourceFile:fs/ext4/namei.c SourceLine:1633] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__ext4_read_dirblock SourceFile:fs/ext4/namei.c SourceLine:134] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__find_get_block_slow SourceFile:fs/buffer.c SourceLine:184] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__get_metapage SourceFile:fs/jfs/jfs_metapage.c SourceLine:698] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__get_metapage SourceFile:fs/jfs/jfs_metapage.c SourceLine:716] map[ReferenceKind:reads ReferencingEntityKind:function 
ReferencingEntityName:__gfs2_fallocate SourceFile:fs/gfs2/file.c SourceLine:1335] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__gfs2_iomap_alloc SourceFile:fs/gfs2/bmap.c SourceLine:669] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__gfs2_iomap_alloc SourceFile:fs/gfs2/bmap.c SourceLine:765] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__gfs2_iomap_alloc SourceFile:fs/gfs2/bmap.c SourceLine:774] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__gfs2_iomap_get SourceFile:fs/gfs2/bmap.c SourceLine:886] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__gfs2_iomap_get SourceFile:fs/gfs2/bmap.c SourceLine:894] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__gfs2_iomap_get SourceFile:fs/gfs2/bmap.c SourceLine:895] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__gfs2_iomap_get SourceFile:fs/gfs2/bmap.c SourceLine:896] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__gfs2_iomap_get SourceFile:fs/gfs2/bmap.c SourceLine:898] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__gfs2_iomap_get SourceFile:fs/gfs2/bmap.c SourceLine:921] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__gfs2_iomap_get SourceFile:fs/gfs2/bmap.c SourceLine:922] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__gfs2_iomap_get SourceFile:fs/gfs2/bmap.c SourceLine:949] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__invalidate_metapages SourceFile:fs/jfs/jfs_metapage.c SourceLine:901] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:_ext4_get_block SourceFile:fs/ext4/inode.c SourceLine:907] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:bh_get SourceFile:fs/gfs2/quota.c SourceLine:415] 
map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:bh_get SourceFile:fs/gfs2/quota.c SourceLine:423] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:block_size SourceFile:block/bdev.c SourceLine:1371] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:block_truncate_page SourceFile:fs/buffer.c SourceLine:2671] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:ceph_fill_inode SourceFile:fs/ceph/inode.c SourceLine:1107] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:ceph_fill_inode SourceFile:fs/ceph/inode.c SourceLine:1109] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:ceph_fill_inode SourceFile:fs/ceph/inode.c SourceLine:1112] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:ceph_fill_inode SourceFile:fs/ceph/inode.c SourceLine:1216] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:ceph_new_inode SourceFile:fs/ceph/inode.c SourceLine:127] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:cifs_alloc_inode SourceFile:fs/smb/client/cifsfs.c SourceLine:470] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:clean_bdev_aliases SourceFile:fs/buffer.c SourceLine:1738] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:dax_writeback_mapping_range SourceFile:fs/dax.c SourceLine:1209] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:do_mpage_readpage SourceFile:fs/mpage.c SourceLine:155] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:do_perf_trace_gfs2_iomap_end SourceFile:fs/gfs2/trace_gfs2.h SourceLine:511] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:do_trace_event_raw_event_gfs2_iomap_end SourceFile:fs/gfs2/trace_gfs2.h SourceLine:511] map[ReferenceKind:reads 
ReferencingEntityKind:function ReferencingEntityName:dquot_alloc_block SourceFile:include/linux/quotaops.h SourceLine:336] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:dquot_alloc_block_nodirty SourceFile:include/linux/quotaops.h SourceLine:326] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:dquot_alloc_block_nofail SourceFile:include/linux/quotaops.h SourceLine:331] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:dquot_claim_block SourceFile:include/linux/quotaops.h SourceLine:362] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:dquot_free_block SourceFile:include/linux/quotaops.h SourceLine:390] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:dquot_free_block_nodirty SourceFile:include/linux/quotaops.h SourceLine:385] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:dquot_prealloc_block_nodirty SourceFile:include/linux/quotaops.h SourceLine:341] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:dquot_reclaim_block SourceFile:include/linux/quotaops.h SourceLine:368] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:dquot_release_reservation_block SourceFile:include/linux/quotaops.h SourceLine:396] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:dquot_reserve_block SourceFile:include/linux/quotaops.h SourceLine:356] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:erofs_find_target_block SourceFile:fs/erofs/namei.c SourceLine:93] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:erofs_get_aops SourceFile:fs/erofs/internal.h SourceLine:480] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:erofs_map_blocks SourceFile:fs/erofs/data.c SourceLine:110] map[ReferenceKind:reads ReferencingEntityKind:function 
ReferencingEntityName:exfat_get_block SourceFile:fs/exfat/inode.c SourceLine:253] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_alloc_file_blocks SourceFile:fs/ext4/extents.c SourceLine:4557] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_alloc_file_blocks SourceFile:fs/ext4/extents.c SourceLine:4618] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_block_write_begin SourceFile:fs/ext4/inode.c SourceLine:1179] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_collapse_range SourceFile:fs/ext4/extents.c SourceLine:5539] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_collapse_range SourceFile:fs/ext4/extents.c SourceLine:5540] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_convert_meta_bg SourceFile:fs/ext4/resize.c SourceLine:1928] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_convert_unwritten_extents SourceFile:fs/ext4/extents.c SourceLine:5001] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_convert_unwritten_extents_atomic SourceFile:fs/ext4/extents.c SourceLine:4927] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_da_should_update_i_disksize SourceFile:fs/ext4/inode.c SourceLine:3182] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_do_fallocate SourceFile:fs/ext4/extents.c SourceLine:4778] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_do_fallocate SourceFile:fs/ext4/extents.c SourceLine:4779] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_do_fallocate SourceFile:fs/ext4/extents.c SourceLine:4779] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_do_fallocate SourceFile:fs/ext4/extents.c SourceLine:4779] 
map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_do_fallocate SourceFile:fs/ext4/extents.c SourceLine:4779] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_dx_find_entry SourceFile:fs/ext4/namei.c SourceLine:1731] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_end_bio SourceFile:fs/ext4/page-io.c SourceLine:369] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_get_verity_descriptor_location SourceFile:fs/ext4/verity.c SourceLine:307] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_init_orphan_info SourceFile:fs/ext4/orphan.c SourceLine:593] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_inode_blocks SourceFile:fs/ext4/inode.c SourceLine:5004] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_inode_blocks_set SourceFile:fs/ext4/inode.c SourceLine:4642] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_insert_range SourceFile:fs/ext4/extents.c SourceLine:5642] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_insert_range SourceFile:fs/ext4/extents.c SourceLine:5643] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_iomap_alloc SourceFile:fs/ext4/inode.c SourceLine:3705] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_iomap_begin SourceFile:fs/ext4/inode.c SourceLine:3753] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_iomap_begin_report SourceFile:fs/ext4/inode.c SourceLine:3837] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_map_blocks SourceFile:fs/ext4/inode.c SourceLine:832] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_map_blocks SourceFile:fs/ext4/inode.c SourceLine:833] map[ReferenceKind:reads 
ReferencingEntityKind:function ReferencingEntityName:ext4_mb_get_buddy_folio_lock SourceFile:fs/ext4/mballoc.c SourceLine:1556] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_get_buddy_folio_lock SourceFile:fs/ext4/mballoc.c SourceLine:1565] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_get_buddy_folio_lock SourceFile:fs/ext4/mballoc.c SourceLine:1568] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_init_cache SourceFile:fs/ext4/mballoc.c SourceLine:1409] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_init_cache SourceFile:fs/ext4/mballoc.c SourceLine:1448] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_load_buddy_gfp SourceFile:fs/ext4/mballoc.c SourceLine:1714] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_load_buddy_gfp SourceFile:fs/ext4/mballoc.c SourceLine:1747] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_load_buddy_gfp SourceFile:fs/ext4/mballoc.c SourceLine:1765] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_load_buddy_gfp SourceFile:fs/ext4/mballoc.c SourceLine:1768] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_load_buddy_gfp SourceFile:fs/ext4/mballoc.c SourceLine:1814] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mpage_readpages SourceFile:fs/ext4/readpage.c SourceLine:216] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mpage_readpages SourceFile:fs/ext4/readpage.c SourceLine:252] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mpage_readpages SourceFile:fs/ext4/readpage.c SourceLine:253] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_overwrite_io SourceFile:fs/ext4/file.c 
SourceLine:222] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_punch_hole SourceFile:fs/ext4/inode.c SourceLine:4397] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_punch_hole SourceFile:fs/ext4/inode.c SourceLine:4398] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_readdir SourceFile:fs/ext4/dir.c SourceLine:196] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_set_iomap SourceFile:fs/ext4/inode.c SourceLine:3449] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_set_iomap SourceFile:fs/ext4/inode.c SourceLine:3472] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_set_iomap SourceFile:fs/ext4/inode.c SourceLine:3473] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_xattr_inode_read SourceFile:fs/ext4/xattr.c SourceLine:385] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_xattr_inode_read SourceFile:fs/ext4/xattr.c SourceLine:386] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_xattr_inode_write SourceFile:fs/ext4/xattr.c SourceLine:1391] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_zero_range SourceFile:fs/ext4/extents.c SourceLine:4673] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_zero_range SourceFile:fs/ext4/extents.c SourceLine:4713] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fallocate_chunk SourceFile:fs/gfs2/file.c SourceLine:1206] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fallocate_chunk SourceFile:fs/gfs2/file.c SourceLine:1207] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fat_get_block SourceFile:fs/fat/inode.c SourceLine:187] map[ReferenceKind:reads ReferencingEntityKind:function 
ReferencingEntityName:fat_get_block_bmap SourceFile:fs/fat/inode.c SourceLine:295] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:filemap_range_uptodate SourceFile:mm/filemap.c SourceLine:2528] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:folio_create_buffers SourceFile:fs/buffer.c SourceLine:1800] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:folio_create_buffers SourceFile:fs/buffer.c SourceLine:1800] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:folio_create_buffers SourceFile:fs/buffer.c SourceLine:1800] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:folio_create_buffers SourceFile:fs/buffer.c SourceLine:1800] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:folio_create_buffers SourceFile:fs/buffer.c SourceLine:1800] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:folio_create_buffers SourceFile:fs/buffer.c SourceLine:1800] map[ReferenceKind:takes-address-of ReferencingEntityKind:function ReferencingEntityName:folio_create_buffers SourceFile:fs/buffer.c SourceLine:1800] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fscrypt_policy_du_bits SourceFile:fs/crypto/fscrypt_private.h SourceLine:218] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fscrypt_policy_v2_du_bits SourceFile:fs/crypto/fscrypt_private.h SourceLine:209] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fscrypt_prepare_new_inode SourceFile:fs/crypto/keysetup.c SourceLine:746] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fscrypt_setup_encryption_info SourceFile:fs/crypto/keysetup.c SourceLine:613] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fscrypt_supported_v2_policy SourceFile:fs/crypto/policy.c SourceLine:255] map[ReferenceKind:reads 
ReferencingEntityKind:function ReferencingEntityName:fscrypt_supported_v2_policy SourceFile:fs/crypto/policy.c SourceLine:262] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fscrypt_zeroout_range SourceFile:fs/crypto/bio.c SourceLine:142] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fscrypt_zeroout_range SourceFile:fs/crypto/bio.c SourceLine:143] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fscrypt_zeroout_range SourceFile:fs/crypto/bio.c SourceLine:144] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fscrypt_zeroout_range_inline_crypt SourceFile:fs/crypto/bio.c SourceLine:76] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:fsstack_copy_attr_all SourceFile:fs/stack.c SourceLine:72] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fsstack_copy_attr_all SourceFile:fs/stack.c SourceLine:72] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fsverity_init_merkle_tree_params SourceFile:fs/verity/open.c SourceLine:84] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fuse_fill_attr_from_inode SourceFile:fs/fuse/inode.c SourceLine:1698] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:generic_swapfile_activate SourceFile:mm/page_io.c SourceLine:94] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_add_inode_blocks SourceFile:fs/gfs2/inode.h SourceLine:57] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_alloc_extent SourceFile:fs/gfs2/bmap.c SourceLine:1290] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_block_map SourceFile:fs/gfs2/bmap.c SourceLine:1232] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_block_map SourceFile:fs/gfs2/bmap.c SourceLine:1254] map[ReferenceKind:reads 
ReferencingEntityKind:function ReferencingEntityName:gfs2_check_internal_file_size SourceFile:fs/gfs2/inode.h SourceLine:81] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_get_extent SourceFile:fs/gfs2/bmap.c SourceLine:1269] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_get_inode_blocks SourceFile:fs/gfs2/inode.h SourceLine:52] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_hole_size SourceFile:fs/gfs2/bmap.c SourceLine:600] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_iomap_begin_write SourceFile:fs/gfs2/bmap.c SourceLine:1054] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_iomap_get_folio SourceFile:fs/gfs2/bmap.c SourceLine:976] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_map_journal_extents SourceFile:fs/gfs2/bmap.c SourceLine:2287] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_set_inode_blocks SourceFile:fs/gfs2/inode.h SourceLine:47] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_unstuffer_folio SourceFile:fs/gfs2/bmap.c SourceLine:75] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_write_alloc_required SourceFile:fs/gfs2/bmap.c SourceLine:2350] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_write_jdata_batch SourceFile:fs/gfs2/aops.c SourceLine:205] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:hpfs_iomap_begin SourceFile:fs/hpfs/file.c SourceLine:126] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:htree_dirblock_to_tree SourceFile:fs/ext4/namei.c SourceLine:1077] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:i_blocks_per_folio SourceFile:include/linux/pagemap.h SourceLine:1592] map[ReferenceKind:reads 
ReferencingEntityKind:function ReferencingEntityName:i_blocksize SourceFile:include/linux/fs.h SourceLine:979] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ifs_clear_range_dirty SourceFile:fs/iomap/buffered-io.c SourceLine:179] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ifs_clear_range_dirty SourceFile:fs/iomap/buffered-io.c SourceLine:180] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ifs_find_dirty_range SourceFile:fs/iomap/buffered-io.c SourceLine:142] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ifs_find_dirty_range SourceFile:fs/iomap/buffered-io.c SourceLine:143] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ifs_find_dirty_range SourceFile:fs/iomap/buffered-io.c SourceLine:143] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ifs_find_dirty_range SourceFile:fs/iomap/buffered-io.c SourceLine:157] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ifs_find_dirty_range SourceFile:fs/iomap/buffered-io.c SourceLine:158] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ifs_set_range_dirty SourceFile:fs/iomap/buffered-io.c SourceLine:202] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ifs_set_range_dirty SourceFile:fs/iomap/buffered-io.c SourceLine:203] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ifs_set_range_uptodate SourceFile:fs/iomap/buffered-io.c SourceLine:70] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ifs_set_range_uptodate SourceFile:fs/iomap/buffered-io.c SourceLine:71] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:inode_getblk SourceFile:fs/udf/inode.c SourceLine:842] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:inode_init_always_gfp SourceFile:fs/inode.c 
SourceLine:235] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:iomap_adjust_read_range SourceFile:fs/iomap/buffered-io.c SourceLine:293] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:iomap_bmap SourceFile:fs/iomap/fiemap.c SourceLine:99] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:iomap_bmap SourceFile:fs/iomap/fiemap.c SourceLine:103] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:iomap_dio_bio_iter_one SourceFile:fs/iomap/direct-io.c SourceLine:346] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:iomap_dio_zero SourceFile:fs/iomap/direct-io.c SourceLine:314] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:iomap_is_partially_uptodate SourceFile:fs/iomap/buffered-io.c SourceLine:694] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:iomap_is_partially_uptodate SourceFile:fs/iomap/buffered-io.c SourceLine:695] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:iomap_to_bh SourceFile:fs/buffer.c SourceLine:2043] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:iomap_to_bh SourceFile:fs/buffer.c SourceLine:2098] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:iomap_write_delalloc_ifs_punch SourceFile:fs/iomap/buffered-io.c SourceLine:1236] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_get_block SourceFile:fs/jfs/inode.c SourceLine:213] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_get_block SourceFile:fs/jfs/inode.c SourceLine:243] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_get_block SourceFile:fs/jfs/inode.c SourceLine:260] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:make_empty_dir_inode SourceFile:fs/libfs.c SourceLine:1781] 
map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:map_buffer_to_folio SourceFile:fs/mpage.c SourceLine:110] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:metapage_get_blocks SourceFile:fs/jfs/jfs_metapage.c SourceLine:330] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:metapage_read_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:577] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:metapage_read_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:594] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:metapage_read_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:597] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:metapage_read_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:598] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:metapage_write_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:444] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:metapage_write_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:468] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:metapage_write_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:475] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:metapage_write_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:492] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:metapage_write_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:506] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:metapage_write_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:512] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mext_check_adjust_range SourceFile:fs/ext4/move_extent.c SourceLine:524] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mext_check_adjust_range 
SourceFile:fs/ext4/move_extent.c SourceLine:525] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mext_check_adjust_range SourceFile:fs/ext4/move_extent.c SourceLine:542] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mext_check_adjust_range SourceFile:fs/ext4/move_extent.c SourceLine:543] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mext_folio_mkuptodate SourceFile:fs/ext4/move_extent.c SourceLine:137] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mext_folio_mkwrite SourceFile:fs/ext4/move_extent.c SourceLine:276] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mext_move_begin SourceFile:fs/ext4/move_extent.c SourceLine:202] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mext_move_extent SourceFile:fs/ext4/move_extent.c SourceLine:306] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:move_dirty_folio_in_page_array SourceFile:fs/ceph/addr.c SourceLine:1265] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mpage_map_and_submit_buffers SourceFile:fs/ext4/inode.c SourceLine:2300] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mpage_map_and_submit_buffers SourceFile:fs/ext4/inode.c SourceLine:2301] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mpage_map_and_submit_buffers SourceFile:fs/ext4/inode.c SourceLine:2312] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mpage_map_and_submit_extent SourceFile:fs/ext4/inode.c SourceLine:2465] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mpage_prepare_extent_to_map SourceFile:fs/ext4/inode.c SourceLine:2650] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mpage_prepare_extent_to_map SourceFile:fs/ext4/inode.c SourceLine:2728] map[ReferenceKind:reads 
ReferencingEntityKind:function ReferencingEntityName:mpage_process_folio SourceFile:fs/ext4/inode.c SourceLine:2253] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mpage_process_page_bufs SourceFile:fs/ext4/inode.c SourceLine:2170] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mpage_release_unused_pages SourceFile:fs/ext4/inode.c SourceLine:1721] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mpage_release_unused_pages SourceFile:fs/ext4/inode.c SourceLine:1722] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mpage_submit_partial_folio SourceFile:fs/ext4/inode.c SourceLine:2408] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mpage_write_folio SourceFile:fs/mpage.c SourceLine:460] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_bmap_data_get_key SourceFile:fs/nilfs2/bmap.c SourceLine:437] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_btnode_prepare_change_key SourceFile:fs/nilfs2/btnode.c SourceLine:223] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_dirty_folio SourceFile:fs/nilfs2/inode.c SourceLine:202] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_fiemap SourceFile:fs/nilfs2/inode.c SourceLine:1112] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_find_uncommitted_extent SourceFile:fs/nilfs2/page.c SourceLine:519] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_find_uncommitted_extent SourceFile:fs/nilfs2/page.c SourceLine:538] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_get_block SourceFile:fs/nilfs2/inode.c SourceLine:81] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_get_block SourceFile:fs/nilfs2/inode.c SourceLine:89] 
map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_grab_buffer SourceFile:fs/nilfs2/page.c SourceLine:51] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_mdt_forget_block SourceFile:fs/nilfs2/mdt.c SourceLine:356] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_mdt_forget_block SourceFile:fs/nilfs2/mdt.c SourceLine:371] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_mdt_freeze_buffer SourceFile:fs/nilfs2/mdt.c SourceLine:573] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_mdt_get_frozen_buffer SourceFile:fs/nilfs2/mdt.c SourceLine:616] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_page_mkwrite SourceFile:fs/nilfs2/file.c SourceLine:107] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_palloc_entries_per_group SourceFile:fs/nilfs2/alloc.h SourceLine:30] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_palloc_groups_count SourceFile:fs/nilfs2/alloc.c SourceLine:43] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_recover_dsync_blocks SourceFile:fs/nilfs2/recovery.c SourceLine:543] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nvmet_file_ns_enable SourceFile:drivers/nvme/target/io-cmd-file.c SourceLine:56] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_get_block SourceFile:fs/ocfs2/aops.c SourceLine:132] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_get_block SourceFile:fs/ocfs2/aops.c SourceLine:181] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_write_zero_page SourceFile:fs/ocfs2/file.c SourceLine:768] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:omfs_get_block SourceFile:fs/omfs/file.c 
SourceLine:229] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:omfs_get_block SourceFile:fs/omfs/file.c SourceLine:259] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:orangefs_inode_getattr SourceFile:fs/orangefs/orangefs-utils.c SourceLine:320] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:read_file_page SourceFile:drivers/md/md-bitmap.c SourceLine:573] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:rock_continue SourceFile:fs/isofs/rock.c SourceLine:88] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:round_up_cluster SourceFile:fs/ext4/xattr.c SourceLine:909] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:set_blocksize SourceFile:block/bdev.c SourceLine:192] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:set_blocksize SourceFile:block/bdev.c SourceLine:209] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:set_init_blocksize SourceFile:block/bdev.c SourceLine:150] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_copy_fi SourceFile:fs/udf/directory.c SourceLine:70] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_extend_file SourceFile:fs/udf/inode.c SourceLine:683] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_extend_file SourceFile:fs/udf/inode.c SourceLine:702] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_fiiter_add_entry SourceFile:fs/udf/namei.c SourceLine:226] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_fiiter_add_entry SourceFile:fs/udf/namei.c SourceLine:275] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_fiiter_add_entry SourceFile:fs/udf/namei.c SourceLine:293] map[ReferenceKind:reads ReferencingEntityKind:function 
ReferencingEntityName:udf_fiiter_advance SourceFile:fs/udf/directory.c SourceLine:293] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_fiiter_advance_blk SourceFile:fs/udf/directory.c SourceLine:173] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_fiiter_advance_blk SourceFile:fs/udf/directory.c SourceLine:173] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_fiiter_append_blk SourceFile:fs/udf/directory.c SourceLine:462] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_fiiter_append_blk SourceFile:fs/udf/directory.c SourceLine:476] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_fiiter_init SourceFile:fs/udf/directory.c SourceLine:269] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_fiiter_load_bhs SourceFile:fs/udf/directory.c SourceLine:196] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_fiiter_write_fi SourceFile:fs/udf/directory.c SourceLine:412] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_map_block SourceFile:fs/udf/inode.c SourceLine:451] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_readahead_dir SourceFile:fs/udf/directory.c SourceLine:127] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_readahead_dir SourceFile:fs/udf/directory.c SourceLine:135] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_readahead_dir SourceFile:fs/udf/directory.c SourceLine:136] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_verify_fi SourceFile:fs/udf/directory.c SourceLine:42] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ufs_alloc_lastblock SourceFile:fs/ufs/inode.c SourceLine:1054] map[ReferenceKind:reads ReferencingEntityKind:function 
ReferencingEntityName:ufs_alloc_lastblock SourceFile:fs/ufs/inode.c SourceLine:1060] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ufs_change_blocknr SourceFile:fs/ufs/balloc.c SourceLine:240] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ufs_change_blocknr SourceFile:fs/ufs/balloc.c SourceLine:256] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ufs_change_blocknr SourceFile:fs/ufs/balloc.c SourceLine:258] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ufs_get_locked_folio SourceFile:fs/ufs/util.c SourceLine:221] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:z_erofs_load_compact_lcluster SourceFile:fs/erofs/zmap.c SourceLine:104]] TruncatedOutput:false] |
| 240/4 | 2026/03/16 16:24 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 40267 output: 24 thoughts: 626

Thoughts: **Analyzing Block Size in JFS**

Okay, so I'm diving into how JFS handles block size and where `i_blkbits` and the superblock's block size bits (`s_blocksize_bits`) come into play. Initially, I noticed that `i_blkbits` gets read in several key metapage functions within JFS: `__get_metapage`, `__invalidate_metapages`, `jfs_get_block`, `metapage_get_blocks`, `metapage_read_folio`, and `metapage_write_folio`. I couldn't find any direct writes to it within the `fs/jfs/` directory, which piqued my curiosity. Then I realized I missed a critical reference point: `jfs_fill_super`. This is where `i_blkbits` gets initialized, specifically via a call to `inode_init_always` within `new_inode`.

Let me circle back to `__get_metapage` for a moment. I see that `l2bsize` is set to `inode->i_blkbits` and the superblock is stored in the `mp->sb` pointer. What I *really* want is the superblock's `s_blocksize_bits`, specifically `mp->sb->s_blocksize_bits`, and I can see that `mp->sb` actually *is* `inode->i_sb`. This means I should be considering `inode->i_sb->s_blocksize_bits`!

The crucial question is, are `inode->i_blkbits` and `inode->i_sb->s_blocksize_bits` consistently equal? It appears that they are. My reasoning:

1. `sb->s_blocksize_bits` is set to 12 (4K) and is *never* changed later.
2. Inodes are initialized with `sb->s_blocksize_bits`.
3. JFS code doesn't seem to alter `i_blkbits` after inode initialization.

I did briefly think about `direct_inode` (created via `new_inode` -> `alloc_inode` -> `inode_init_always`), but it follows the same initialization process:

```c
inode->i_sb = sb;
inode->i_blkbits = sb->s_blocksize_bits;
```

The next thing that came to mind was whether `sb_set_blocksize` could be called *after* inodes were created, potentially causing a mismatch. However, in `jfs_fill_super`, `sb_set_blocksize(sb, PSIZE)` is called *before* inode creation via `new_inode`.

I also checked `jfs_mount`, specifically `chkSuper`. `chkSuper` verifies `bsize == PSIZE`, and *doesn't* call `sb_set_blocksize` again if it is. To be absolutely certain, I need to check if `sb_set_blocksize` is called *anywhere* else in the JFS code... I should search the entire code base for that call. |
| 241/4 | 2026/03/16 16:24 | tool | codesearch-find-references |
Args: Name:sb_set_blocksize

Results (all ReferenceKind:calls, ReferencingEntityKind:function):
- adfs_probe: fs/adfs/super.c:268
- affs_set_blocksize: fs/affs/affs.h:231
- befs_fill_super: fs/befs/linuxvfs.c:863
- bfs_fill_super: fs/bfs/inode.c:345
- efs_fill_super: fs/efs/super.c:254
- erofs_fc_fill_super: fs/erofs/super.c:688
- erofs_fc_fill_super: fs/erofs/super.c:710
- exfat_calibrate_blocksize: fs/exfat/super.c:415
- ext4_load_super: fs/ext4/super.c:5167
- f2fs_fill_super: fs/f2fs/super.c:4966
- fat_fill_super: fs/fat/inode.c:1636
- fuse_fill_super_common: fs/fuse/inode.c:1842
- hfs_mdb_get: fs/hfs/mdb.c:148
- hfsplus_read_wrapper: fs/hfsplus/wrapper.c:222
- hpfs_fill_super: fs/hpfs/super.c:526
- init_nilfs: fs/nilfs2/the_nilfs.c:748
- init_sb: fs/gfs2/ops_fstype.c:490
- isofs_fill_super: fs/isofs/inode.c:821
- jfs_fill_super: fs/jfs/super.c:494
- minix_fill_super: fs/minix/inode.c:237
- minix_fill_super: fs/minix/inode.c:291
- ntfs_init_from_boot: fs/ntfs3/super.c:1184
- ntfs_init_from_boot: fs/ntfs3/super.c:1235
- ocfs2_get_sector: fs/ocfs2/super.c:1699
- omfs_fill_super: fs/omfs/inode.c:481
- omfs_fill_super: fs/omfs/inode.c:527
- qnx4_fill_super: fs/qnx4/inode.c:205
- qnx6_fill_super: fs/qnx6/inode.c:312
- qnx6_fill_super: fs/qnx6/inode.c:354
- qnx6_mmi_fill_super: fs/qnx6/super_mmi.c:72
- romfs_fill_super: fs/romfs/super.c:461
- sb_min_blocksize: block/bdev.c:255
- setup_bdev_super: fs/super.c:1651
- udf_load_vrs: fs/udf/super.c:2005
- ufs_fill_super: fs/ufs/super.c:926
- vxfs_fill_super: fs/freevxfs/vxfs_super.c:246
- zonefs_fill_super: fs/zonefs/super.c:1260

TruncatedOutput:false] |
| 242/4 | 2026/03/16 16:24 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 43547 output: 42 thoughts: 569

Thoughts: **Analyzing Block Size Consistency in JFS**

Okay, so I'm digging into JFS, specifically the block size setup. It seems `sb_set_blocksize` is only called during `jfs_fill_super`, which is a good starting point. But I need to verify something more fundamental: can `i_blkbits` (the block size shift for an inode) ever be *different* from the superblock's `s_blocksize_bits`? In JFS, as I understand it, everything, both data and metadata, *should* be managed in 4K blocks. The `direct_inode` is special; it handles metadata, and the other inodes are for files and directories.

So, the question remains: is it *possible* for `i_blkbits` to diverge from `sb->s_blocksize_bits`? I know some filesystems allow varying block sizes, particularly if they support multiple block sizes or use something like pseudo-inodes. But I can't recall any indication that JFS supports anything *other* than 4K blocks.

Right, let me revisit `__get_metapage`. I remember that code:

```c
698: l2bsize = inode->i_blkbits;
699: l2BlocksPerPage = PAGE_SHIFT - l2bsize;
700: page_index = lblock >> l2BlocksPerPage;
701: page_offset = (lblock - (page_index << l2BlocksPerPage)) << l2bsize;
```

This code is crucial. If `inode->i_blkbits` were *different* from `PAGE_SHIFT` (which is typically 12, representing 4K), then `l2BlocksPerPage` would be non-zero. But, if my assumptions are correct, JFS only uses 4K blocks, and `PAGE_SIZE` is also almost always 4K. Although, if `PAGE_SIZE` were 64K, `l2BlocksPerPage` would equal `16 - 12 = 4` for the larger page size.

So, the key question is: is `mp->sb->s_blocksize_bits` *always* equal to `folio->mapping->host->i_blkbits`? Let's break down that access:

* `mp->sb` is the same as `inode->i_sb`.
* `mp->folio` comes from `mapping`.
* `mapping` is either `inode->i_mapping` or `sbi->direct_inode->i_mapping`.
* Therefore, `mp->folio->mapping->host` is *either* `inode` or `sbi->direct_inode`.

Both `inode` and `sbi->direct_inode` share the same `i_sb`. And if the system is properly designed, both should have the exact same block size information: `i_blkbits == i_sb->s_blocksize_bits`. However, the question keeps nagging me: *is* there any case, even an obscure one, where `i_blkbits` *wouldn't* equal `i_sb->s_blocksize_bits`? Let me investigate `jfs_new_inode` to solidify my confidence. |
| 243/4 | 2026/03/16 16:24 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_inode.c IncludeLines:true Name:ialloc] Results: map[SourceCode: 35: /*
36: * NAME: ialloc()
37: *
38: * FUNCTION: Allocate a new inode
39: *
40: */
41: struct inode *ialloc(struct inode *parent, umode_t mode)
42: {
43: struct super_block *sb = parent->i_sb;
44: struct inode *inode;
45: struct jfs_inode_info *jfs_inode;
46: int rc;
47:
48: inode = new_inode(sb);
49: if (!inode) {
50: jfs_warn("ialloc: new_inode returned NULL!");
51: return ERR_PTR(-ENOMEM);
52: }
53:
54: jfs_inode = JFS_IP(inode);
55:
56: rc = diAlloc(parent, S_ISDIR(mode), inode);
57: if (rc) {
58: jfs_warn("ialloc: diAlloc returned %d!", rc);
59: goto fail_put;
60: }
61:
62: if (insert_inode_locked(inode) < 0) {
63: rc = -EINVAL;
64: goto fail_put;
65: }
66:
67: inode_init_owner(&nop_mnt_idmap, inode, parent, mode);
68: /*
69: * New inodes need to save sane values on disk when
70: * uid & gid mount options are used
71: */
72: jfs_inode->saved_uid = inode->i_uid;
73: jfs_inode->saved_gid = inode->i_gid;
74:
75: /*
76: * Allocate inode to quota.
77: */
78: rc = dquot_initialize(inode);
79: if (rc)
80: goto fail_drop;
81: rc = dquot_alloc_inode(inode);
82: if (rc)
83: goto fail_drop;
84:
85: /* inherit flags from parent */
86: jfs_inode->mode2 = JFS_IP(parent)->mode2 & JFS_FL_INHERIT;
87:
88: if (S_ISDIR(mode)) {
89: jfs_inode->mode2 |= IDIRECTORY;
90: jfs_inode->mode2 &= ~JFS_DIRSYNC_FL;
91: }
92: else {
93: jfs_inode->mode2 |= INLINEEA | ISPARSE;
94: if (S_ISLNK(mode))
95: jfs_inode->mode2 &= ~(JFS_IMMUTABLE_FL|JFS_APPEND_FL);
96: }
97: jfs_inode->mode2 |= inode->i_mode;
98:
99: inode->i_blocks = 0;
100: simple_inode_init_ts(inode);
101: jfs_inode->otime = inode_get_ctime_sec(inode);
102: inode->i_generation = JFS_SBI(sb)->gengen++;
103:
104: jfs_inode->cflag = 0;
105:
106: /* Zero remaining fields */
107: memset(&jfs_inode->acl, 0, sizeof(dxd_t));
108: memset(&jfs_inode->ea, 0, sizeof(dxd_t));
109: jfs_inode->next_index = 0;
110: jfs_inode->acltype = 0;
111: jfs_inode->btorder = 0;
112: jfs_inode->btindex = 0;
113: jfs_inode->bxflag = 0;
114: jfs_inode->blid = 0;
115: jfs_inode->atlhead = 0;
116: jfs_inode->atltail = 0;
117: jfs_inode->xtlid = 0;
118: jfs_set_inode_flags(inode);
119:
120: jfs_info("ialloc returns inode = 0x%p", inode);
121:
122: return inode;
123:
124: fail_drop:
125: dquot_drop(inode);
126: inode->i_flags |= S_NOQUOTA;
127: clear_nlink(inode);
128: discard_new_inode(inode);
129: return ERR_PTR(rc);
130:
131: fail_put:
132: iput(inode);
133: return ERR_PTR(rc);
134: }
SourceFile:fs/jfs/jfs_inode.c] |
| 244/4 | 2026/03/16 16:24 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 46303 output: 43 thoughts: 133

Thoughts: **My JFS Inode Generation Journey**

Okay, so I'm digging into how inodes are created in the JFS filesystem. The initial thought I have is that `ialloc` calls `new_inode(sb)`, and that's the place where the block size bits are initialized via `i_blkbits = sb->s_blocksize_bits`. That seems straightforward enough, but my expert mind immediately asks... is *that* the only way to get a new inode? Surely there are other paths.

Let me refresh my memory. Aha! There's also `jfs_iget`. That goes through `iget_locked`, which in turn goes to `new_inode`, and then through `inode_init_always`. Got it. So that's another route to a new inode. And I should remember, `new_inode` is used in `jfs_fill_super` to create the `direct_inode`.

But... hold on a second. What about those special inodes, like `ipbmap`, `ipaimap`, and others? Those are crucial. How do those come into being? I know they're special! I recall them being read using `diReadSpecial`. Right, I need to check that flow. Let me get to that quickly! |
| 245/4 | 2026/03/16 16:24 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_imap.c IncludeLines:true Name:diReadSpecial] Results: map[SourceCode: 396: /*
397: * NAME: diReadSpecial()
398: *
399: * FUNCTION: initialize a 'special' inode from disk.
400: *
401: * this routines handles aggregate level inodes. The
402: * inode cache cannot differentiate between the
403: * aggregate inodes and the filesystem inodes, so we
404: * handle these here. We don't actually use the aggregate
405: * inode map, since these inodes are at a fixed location
406: * and in some cases the aggregate inode map isn't initialized
407: * yet.
408: *
409: * PARAMETERS:
410: * sb - filesystem superblock
411: * inum - aggregate inode number
412: * secondary - 1 if secondary aggregate inode table
413: *
414: * RETURN VALUES:
415: * new inode - success
416: * NULL - i/o error.
417: */
418: struct inode *diReadSpecial(struct super_block *sb, ino_t inum, int secondary)
419: {
420: struct jfs_sb_info *sbi = JFS_SBI(sb);
421: uint address;
422: struct dinode *dp;
423: struct inode *ip;
424: struct metapage *mp;
425:
426: ip = new_inode(sb);
427: if (ip == NULL) {
428: jfs_err("diReadSpecial: new_inode returned NULL!");
429: return ip;
430: }
431:
432: if (secondary) {
433: address = addressPXD(&sbi->ait2) >> sbi->l2nbperpage;
434: JFS_IP(ip)->ipimap = sbi->ipaimap2;
435: } else {
436: address = AITBL_OFF >> L2PSIZE;
437: JFS_IP(ip)->ipimap = sbi->ipaimap;
438: }
439:
440: ASSERT(inum < INOSPEREXT);
441:
442: ip->i_ino = inum;
443:
444: address += inum >> 3; /* 8 inodes per 4K page */
445:
446: /* read the page of fixed disk inode (AIT) in raw mode */
447: mp = read_metapage(ip, address << sbi->l2nbperpage, PSIZE, 1);
448: if (mp == NULL) {
449: set_nlink(ip, 1); /* Don't want iput() deleting it */
450: iput(ip);
451: return (NULL);
452: }
453:
454: /* get the pointer to the disk inode of interest */
455: dp = (struct dinode *) (mp->data);
456: dp += inum % 8; /* 8 inodes per 4K page */
457:
458: /* copy on-disk inode to in-memory inode */
459: if ((copy_from_dinode(dp, ip) != 0) || (ip->i_nlink == 0)) {
460: /* handle bad return by returning NULL for ip */
461: set_nlink(ip, 1); /* Don't want iput() deleting it */
462: iput(ip);
463: /* release the page */
464: release_metapage(mp);
465: return (NULL);
466:
467: }
468:
469: ip->i_mapping->a_ops = &jfs_metapage_aops;
470: mapping_set_gfp_mask(ip->i_mapping, GFP_NOFS);
471:
472: /* Allocations to metadata inodes should not affect quotas */
473: ip->i_flags |= S_NOQUOTA;
474:
475: if ((inum == FILESYSTEM_I) && (JFS_IP(ip)->ipimap == sbi->ipaimap)) {
476: sbi->gengen = le32_to_cpu(dp->di_gengen);
477: sbi->inostamp = le32_to_cpu(dp->di_inostamp);
478: }
479:
480: /* release the page */
481: release_metapage(mp);
482:
483: inode_fake_hash(ip);
484:
485: return (ip);
486: }
SourceFile:fs/jfs/jfs_imap.c] |
| 246/4 | 2026/03/16 16:24 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 48172 output: 42 thoughts: 468

Thoughts: **Analyzing JFS Block Size and Inode Initialization**

Okay, so I'm thinking through how block size is handled in JFS, specifically the relationship between `i_blkbits` in inodes and the superblock's `s_blocksize_bits`. It looks like `diReadSpecial` calls `new_inode(sb)`, so all inodes, initially, are created through `new_inode` or `iget_locked`, which itself uses `new_inode`. That means every inode should have `i_blkbits` initialized from the superblock's `s_blocksize_bits`. Given that `s_blocksize_bits` is set in `jfs_fill_super` and *never* changed afterwards, and that JFS only supports 4K blocks, it follows that `i_blkbits` will consistently be equal to `s_blocksize_bits`.

But what about the mapping? Now I'm thinking about how metapages are handled. Let me refresh my memory; in `__get_metapage`, the superblock for a metapage is set: `mp->sb = inode->i_sb`. That means `mp->sb->s_blocksize_bits` is derived from `inode->i_sb->s_blocksize_bits`. And we know that `folio->mapping->host` is the `inode` (or a `direct_inode`). Therefore, `folio->mapping->host->i_blkbits` is equivalent to `inode->i_blkbits`. Given that `inode->i_blkbits` gets its initial value from `inode->i_sb->s_blocksize_bits` and, as far as I can see, is never altered, these values *should* remain the same.

The question then becomes: is there ever a scenario where `folio->mapping->host` is not from the same filesystem? But no, metapages are invariably associated with an inode from the JFS filesystem, and the `mapping` belongs to a JFS inode's `address_space`.

Okay, just to be thorough, could `sb_set_blocksize` ever mess with this after inodes are created? Let me look into that. `sb_set_blocksize` only modifies `sb->s_blocksize` and `sb->s_blocksize_bits`. It does not, crucially, iterate through existing inodes to update `i_blkbits`. And the only time `sb_set_blocksize` is called by JFS is during the mount itself, so it is unlikely that this becomes a problem during operations.

Aha! What if the filesystem is resized? I should investigate `jfs_extendfs` and the implications for the block size. I'll check `fs/jfs/resize.c`. |
| 247/4 | 2026/03/16 16:24 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/resize.c IncludeLines:true Name:jfs_extendfs] Results: map[SourceCode: 29: /*
30: * jfs_extendfs()
31: *
32: * function: extend file system;
33: *
34: * |-------------------------------|----------|----------|
35: * file system space fsck inline log
36: * workspace space
37: *
38: * input:
39: * new LVSize: in LV blocks (required)
40: * new LogSize: in LV blocks (optional)
41: * new FSSize: in LV blocks (optional)
42: *
43: * new configuration:
44: * 1. set new LogSize as specified or default from new LVSize;
45: * 2. compute new FSCKSize from new LVSize;
46: * 3. set new FSSize as MIN(FSSize, LVSize-(LogSize+FSCKSize)) where
47: * assert(new FSSize >= old FSSize),
48: * i.e., file system must not be shrunk;
49: */
50: int jfs_extendfs(struct super_block *sb, s64 newLVSize, int newLogSize)
51: {
52: int rc = 0;
53: struct jfs_sb_info *sbi = JFS_SBI(sb);
54: struct inode *ipbmap = sbi->ipbmap;
55: struct inode *ipbmap2;
56: struct inode *ipimap = sbi->ipimap;
57: struct jfs_log *log = sbi->log;
58: struct bmap *bmp = sbi->bmap;
59: s64 newLogAddress, newFSCKAddress;
60: int newFSCKSize;
61: s64 newMapSize = 0, mapSize;
62: s64 XAddress, XSize, nblocks, xoff, xaddr, t64;
63: s64 oldLVSize;
64: s64 newFSSize;
65: s64 VolumeSize;
66: int newNpages = 0, nPages, newPage, xlen, t32;
67: int tid;
68: int log_formatted = 0;
69: struct inode *iplist[1];
70: struct jfs_superblock *j_sb, *j_sb2;
71: s64 old_agsize;
72: int agsizechanged = 0;
73: struct buffer_head *bh, *bh2;
74:
75: /* If the volume hasn't grown, get out now */
76:
77: if (sbi->mntflag & JFS_INLINELOG)
78: oldLVSize = addressPXD(&sbi->logpxd) + lengthPXD(&sbi->logpxd);
79: else
80: oldLVSize = addressPXD(&sbi->fsckpxd) +
81: lengthPXD(&sbi->fsckpxd);
82:
83: if (oldLVSize >= newLVSize) {
84: printk(KERN_WARNING
85: "jfs_extendfs: volume hasn't grown, returning\n");
86: goto out;
87: }
88:
89: VolumeSize = sb_bdev_nr_blocks(sb);
90: if (VolumeSize) {
91: if (newLVSize > VolumeSize) {
92: printk(KERN_WARNING "jfs_extendfs: invalid size\n");
93: rc = -EINVAL;
94: goto out;
95: }
96: } else {
97: /* check the device */
98: bh = sb_bread(sb, newLVSize - 1);
99: if (!bh) {
100: printk(KERN_WARNING "jfs_extendfs: invalid size\n");
101: rc = -EINVAL;
102: goto out;
103: }
104: bforget(bh);
105: }
106:
107: /* Can't extend write-protected drive */
108:
109: if (isReadOnly(ipbmap)) {
110: printk(KERN_WARNING "jfs_extendfs: read-only file system\n");
111: rc = -EROFS;
112: goto out;
113: }
114:
115: /*
116: * reconfigure LV spaces
117: * ---------------------
118: *
119: * validate new size, or, if not specified, determine new size
120: */
121:
122: /*
123: * reconfigure inline log space:
124: */
125: if ((sbi->mntflag & JFS_INLINELOG)) {
126: if (newLogSize == 0) {
127: /*
128: * no size specified: default to 1/256 of aggregate
129: * size; rounded up to a megabyte boundary;
130: */
131: newLogSize = newLVSize >> 8;
132: t32 = (1 << (20 - sbi->l2bsize)) - 1;
133: newLogSize = (newLogSize + t32) & ~t32;
134: newLogSize =
135: min(newLogSize, MEGABYTE32 >> sbi->l2bsize);
136: } else {
137: /*
138: * convert the newLogSize to fs blocks.
139: *
140: * Since this is given in megabytes, it will always be
141: * an even number of pages.
142: */
143: newLogSize = (newLogSize * MEGABYTE) >> sbi->l2bsize;
144: }
145:
146: } else
147: newLogSize = 0;
148:
149: newLogAddress = newLVSize - newLogSize;
150:
151: /*
152: * reconfigure fsck work space:
153: *
154: * configure it to the end of the logical volume regardless of
155: * whether file system extends to the end of the aggregate;
156: * Need enough 4k pages to cover:
157: * - 1 bit per block in aggregate rounded up to BPERDMAP boundary
158: * - 1 extra page to handle control page and intermediate level pages
159: * - 50 extra pages for the chkdsk service log
160: */
161: t64 = ((newLVSize - newLogSize + BPERDMAP - 1) >> L2BPERDMAP)
162: << L2BPERDMAP;
163: t32 = DIV_ROUND_UP(t64, BITSPERPAGE) + 1 + 50;
164: newFSCKSize = t32 << sbi->l2nbperpage;
165: newFSCKAddress = newLogAddress - newFSCKSize;
166:
167: /*
168: * compute new file system space;
169: */
170: newFSSize = newLVSize - newLogSize - newFSCKSize;
171:
172: /* file system cannot be shrunk */
173: if (newFSSize < bmp->db_mapsize) {
174: rc = -EINVAL;
175: goto out;
176: }
177:
178: /*
179: * If we're expanding enough that the inline log does not overlap
180: * the old one, we can format the new log before we quiesce the
181: * filesystem.
182: */
183: if ((sbi->mntflag & JFS_INLINELOG) && (newLogAddress > oldLVSize)) {
184: if ((rc = lmLogFormat(log, newLogAddress, newLogSize)))
185: goto out;
186: log_formatted = 1;
187: }
188: /*
189: * quiesce file system
190: *
191: * (prepare to move the inline log and to prevent map update)
192: *
193: * block any new transactions and wait for completion of
194: * all wip transactions and flush modified pages s.t.
195: * on-disk file system is in consistent state and
196: * log is not required for recovery.
197: */
198: txQuiesce(sb);
199:
200: /* Reset size of direct inode */
201: sbi->direct_inode->i_size = bdev_nr_bytes(sb->s_bdev);
202:
203: if (sbi->mntflag & JFS_INLINELOG) {
204: /*
205: * deactivate old inline log
206: */
207: lmLogShutdown(log);
208:
209: /*
210: * mark on-disk super block for fs in transition;
211: *
212: * update on-disk superblock for the new space configuration
213: * of inline log space and fsck work space descriptors:
214: * N.B. FS descriptor is NOT updated;
215: *
216: * crash recovery:
217: * logredo(): if FM_EXTENDFS, return to fsck() for cleanup;
218: * fsck(): if FM_EXTENDFS, reformat inline log and fsck
219: * workspace from superblock inline log descriptor and fsck
220: * workspace descriptor;
221: */
222:
223: /* read in superblock */
224: if ((rc = readSuper(sb, &bh)))
225: goto error_out;
226: j_sb = (struct jfs_superblock *)bh->b_data;
227:
228: /* mark extendfs() in progress */
229: j_sb->s_state |= cpu_to_le32(FM_EXTENDFS);
230: j_sb->s_xsize = cpu_to_le64(newFSSize);
231: PXDaddress(&j_sb->s_xfsckpxd, newFSCKAddress);
232: PXDlength(&j_sb->s_xfsckpxd, newFSCKSize);
233: PXDaddress(&j_sb->s_xlogpxd, newLogAddress);
234: PXDlength(&j_sb->s_xlogpxd, newLogSize);
235:
236: /* synchronously update superblock */
237: mark_buffer_dirty(bh);
238: sync_dirty_buffer(bh);
239: brelse(bh);
240:
241: /*
242: * format new inline log synchronously;
243: *
244: * crash recovery: if log move in progress,
245: * reformat log and exit success;
246: */
247: if (!log_formatted)
248: if ((rc = lmLogFormat(log, newLogAddress, newLogSize)))
249: goto error_out;
250:
251: /*
252: * activate new log
253: */
254: log->base = newLogAddress;
255: log->size = newLogSize >> (L2LOGPSIZE - sb->s_blocksize_bits);
256: if ((rc = lmLogInit(log)))
257: goto error_out;
258: }
259:
260: /*
261: * extend block allocation map
262: * ---------------------------
263: *
264: * extendfs() for new extension, retry after crash recovery;
265: *
266: * note: both logredo() and fsck() rebuild map from
267: * the bitmap and configuration parameter from superblock
268: * (disregarding all other control information in the map);
269: *
270: * superblock:
271: * s_size: aggregate size in physical blocks;
272: */
273: /*
274: * compute the new block allocation map configuration
275: *
276: * map dinode:
277: * di_size: map file size in byte;
278: * di_nblocks: number of blocks allocated for map file;
279: * di_mapsize: number of blocks in aggregate (covered by map);
280: * map control page:
281: * db_mapsize: number of blocks in aggregate (covered by map);
282: */
283: newMapSize = newFSSize;
284: /* number of data pages of new bmap file:
285: * roundup new size to full dmap page boundary and
286: * add 1 extra dmap page for next extendfs()
287: */
288: t64 = (newMapSize - 1) + BPERDMAP;
289: newNpages = BLKTODMAPN(t64) + 1;
290:
291: /*
292: * extend map from current map (WITHOUT growing mapfile)
293: *
294: * map new extension with unmapped part of the last partial
295: * dmap page, if applicable, and extra page(s) allocated
296: * at end of bmap by mkfs() or previous extendfs();
297: */
298: extendBmap:
299: /* compute number of blocks requested to extend */
300: mapSize = bmp->db_mapsize;
301: XAddress = mapSize; /* eXtension Address */
302: XSize = newMapSize - mapSize; /* eXtension Size */
303: old_agsize = bmp->db_agsize; /* We need to know if this changes */
304:
305: /* compute number of blocks that can be extended by current mapfile */
306: t64 = dbMapFileSizeToMapSize(ipbmap);
307: if (mapSize > t64) {
308: printk(KERN_ERR "jfs_extendfs: mapSize (0x%Lx) > t64 (0x%Lx)\n",
309: (long long) mapSize, (long long) t64);
310: rc = -EIO;
311: goto error_out;
312: }
313: nblocks = min(t64 - mapSize, XSize);
314:
315: /*
316: * update map pages for new extension:
317: *
318: * update/init dmap and bubble up the control hierarchy
319: * incrementally fold up dmaps into upper levels;
320: * update bmap control page;
321: */
322: if ((rc = dbExtendFS(ipbmap, XAddress, nblocks)))
323: goto error_out;
324:
325: agsizechanged |= (bmp->db_agsize != old_agsize);
326:
327: /*
328: * the map now has extended to cover additional nblocks:
329: * dn_mapsize = oldMapsize + nblocks;
330: */
331: /* ipbmap->i_mapsize += nblocks; */
332: XSize -= nblocks;
333:
334: /*
335: * grow map file to cover remaining extension
336: * and/or one extra dmap page for next extendfs();
337: *
338: * allocate new map pages and its backing blocks, and
339: * update map file xtree
340: */
341: /* compute number of data pages of current bmap file */
342: nPages = ipbmap->i_size >> L2PSIZE;
343:
344: /* need to grow map file ? */
345: if (nPages == newNpages)
346: goto finalizeBmap;
347:
348: /*
349: * grow bmap file for the new map pages required:
350: *
351: * allocate growth at the start of newly extended region;
352: * bmap file only grows sequentially, i.e., both data pages
353: * and possibly xtree index pages may grow in append mode,
354: * s.t. logredo() can reconstruct pre-extension state
355: * by washing away bmap file of pages outside s_size boundary;
356: */
357: /*
358: * journal map file growth as if a regular file growth:
359: * (note: bmap is created with di_mode = IFJOURNAL|IFREG);
360: *
361: * journaling of bmap file growth is not required since
362: * logredo() do/can not use log records of bmap file growth
363: * but it provides careful write semantics, pmap update, etc.;
364: */
365: /* synchronous write of data pages: bmap data pages are
366: * cached in meta-data cache, and not written out
367: * by txCommit();
368: */
369: rc = filemap_fdatawait(ipbmap->i_mapping);
370: if (rc)
371: goto error_out;
372:
373: rc = filemap_write_and_wait(ipbmap->i_mapping);
374: if (rc)
375: goto error_out;
376:
377: diWriteSpecial(ipbmap, 0);
378:
379: newPage = nPages; /* first new page number */
380: xoff = newPage << sbi->l2nbperpage;
381: xlen = (newNpages - nPages) << sbi->l2nbperpage;
382: xlen = min(xlen, (int) nblocks) & ~(sbi->nbperpage - 1);
383: xaddr = XAddress;
384:
385: tid = txBegin(sb, COMMIT_FORCE);
386:
387: if ((rc = xtAppend(tid, ipbmap, 0, xoff, nblocks, &xlen, &xaddr, 0))) {
388: txEnd(tid);
389: goto error_out;
390: }
391: /* update bmap file size */
392: ipbmap->i_size += xlen << sbi->l2bsize;
393: inode_add_bytes(ipbmap, xlen << sbi->l2bsize);
394:
395: iplist[0] = ipbmap;
396: rc = txCommit(tid, 1, &iplist[0], COMMIT_FORCE);
397:
398: txEnd(tid);
399:
400: if (rc)
401: goto error_out;
402:
403: /*
404: * map file has been grown now to cover extension to further out;
405: * di_size = new map file size;
406: *
407: * if huge extension, the previous extension based on previous
408: * map file size may not have been sufficient to cover whole extension
409: * (it could have been used up for new map pages),
410: * but the newly grown map file now covers lot bigger new free space
411: * available for further extension of map;
412: */
413: /* any more blocks to extend ? */
414: if (XSize)
415: goto extendBmap;
416:
417: finalizeBmap:
418: /* finalize bmap */
419: dbFinalizeBmap(ipbmap);
420:
421: /*
422: * update inode allocation map
423: * ---------------------------
424: *
425: * move iag lists from old to new iag;
426: * agstart field is not updated for logredo() to reconstruct
427: * iag lists if system crash occurs.
428: * (computation of ag number from agstart based on agsize
429: * will correctly identify the new ag);
430: */
431: /* if new AG size the same as old AG size, done! */
432: if (agsizechanged) {
433: if ((rc = diExtendFS(ipimap, ipbmap)))
434: goto error_out;
435:
436: /* finalize imap */
437: if ((rc = diSync(ipimap)))
438: goto error_out;
439: }
440:
441: /*
442: * finalize
443: * --------
444: *
445: * extension is committed when on-disk super block is
446: * updated with new descriptors: logredo will recover
447: * crash before it to pre-extension state;
448: */
449:
450: /* sync log to skip log replay of bmap file growth transaction; */
451: /* lmLogSync(log, 1); */
452:
453: /*
454: * synchronous write bmap global control page;
455: * for crash before completion of write
456: * logredo() will recover to pre-extendfs state;
457: * for crash after completion of write,
458: * logredo() will recover post-extendfs state;
459: */
460: if ((rc = dbSync(ipbmap)))
461: goto error_out;
462:
463: /*
464: * copy primary bmap inode to secondary bmap inode
465: */
466:
467: ipbmap2 = diReadSpecial(sb, BMAP_I, 1);
468: if (ipbmap2 == NULL) {
469: printk(KERN_ERR "jfs_extendfs: diReadSpecial(bmap) failed\n");
470: goto error_out;
471: }
472: memcpy(&JFS_IP(ipbmap2)->i_xtroot, &JFS_IP(ipbmap)->i_xtroot, 288);
473: ipbmap2->i_size = ipbmap->i_size;
474: ipbmap2->i_blocks = ipbmap->i_blocks;
475:
476: diWriteSpecial(ipbmap2, 1);
477: diFreeSpecial(ipbmap2);
478:
479: /*
480: * update superblock
481: */
482: if ((rc = readSuper(sb, &bh)))
483: goto error_out;
484: j_sb = (struct jfs_superblock *)bh->b_data;
485:
486: /* mark extendfs() completion */
487: j_sb->s_state &= cpu_to_le32(~FM_EXTENDFS);
488: j_sb->s_size = cpu_to_le64(bmp->db_mapsize <<
489: le16_to_cpu(j_sb->s_l2bfactor));
490: j_sb->s_agsize = cpu_to_le32(bmp->db_agsize);
491:
492: /* update inline log space descriptor */
493: if (sbi->mntflag & JFS_INLINELOG) {
494: PXDaddress(&(j_sb->s_logpxd), newLogAddress);
495: PXDlength(&(j_sb->s_logpxd), newLogSize);
496: }
497:
498: /* record log's mount serial number */
499: j_sb->s_logserial = cpu_to_le32(log->serial);
500:
501: /* update fsck work space descriptor */
502: PXDaddress(&(j_sb->s_fsckpxd), newFSCKAddress);
503: PXDlength(&(j_sb->s_fsckpxd), newFSCKSize);
504: j_sb->s_fscklog = 1;
505: /* sb->s_fsckloglen remains the same */
506:
507: /* Update secondary superblock */
508: bh2 = sb_bread(sb, SUPER2_OFF >> sb->s_blocksize_bits);
509: if (bh2) {
510: j_sb2 = (struct jfs_superblock *)bh2->b_data;
511: memcpy(j_sb2, j_sb, sizeof (struct jfs_superblock));
512:
513: 		mark_buffer_dirty(bh2);
514: sync_dirty_buffer(bh2);
515: brelse(bh2);
516: }
517:
518: /* write primary superblock */
519: mark_buffer_dirty(bh);
520: sync_dirty_buffer(bh);
521: brelse(bh);
522:
523: goto resume;
524:
525: error_out:
526: jfs_error(sb, "\n");
527:
528: resume:
529: /*
530: * resume file system transactions
531: */
532: txResume(sb);
533:
534: out:
535: return rc;
536: }
SourceFile:fs/jfs/resize.c] |
| 248/4 | 2026/03/16 16:24 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 56875 output: 28 thoughts: 155 Thoughts: **JFS Block Size Observations** Okay, so I'm digging into the JFS code, specifically looking at how block sizes are handled during filesystem extension with `jfs_extendfs`. It seems `jfs_extendfs` never touches `sb_set_blocksize`; instead it relies on `sbi->l2bsize` and `sb->s_blocksize_bits`, which indicates the block size isn't dynamically modified during extension. So the block size is fixed at 4KB, matching the `PSIZE` constant, and `sb->s_blocksize_bits` is consistently set to 12; every inode I've looked at has `i_blkbits` set to 12 as well. That's pretty clear. Just to be absolutely certain, though, I should double-check where `i_blkbits` is possibly being set, and scan `fs/jfs/super.c` for any usage of `s_blocksize_bits` to ensure there aren't hidden modifications occurring elsewhere. |
| 249/4 | 2026/03/16 16:24 | tool | codesearch-find-references |
0mArgs:map[Name:super_block::s_blocksize_bits] Results: map[References:[map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__exfat_ent_get SourceFile:fs/exfat/fatent.c SourceLine:45] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__exfat_free_cluster SourceFile:fs/exfat/fatent.c SourceLine:205] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__exfat_free_cluster SourceFile:fs/exfat/fatent.c SourceLine:216] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__exfat_free_cluster SourceFile:fs/exfat/fatent.c SourceLine:245] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__exfat_get_dentry_set SourceFile:fs/exfat/dir.c SourceLine:803] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__ext4_find_entry SourceFile:fs/ext4/namei.c SourceLine:1578] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__ext4_find_entry SourceFile:fs/ext4/namei.c SourceLine:1655] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__ext4_xattr_set_credits SourceFile:fs/ext4/xattr.c SourceLine:979] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__ext4_xattr_set_credits SourceFile:fs/ext4/xattr.c SourceLine:997] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__fat_get_block SourceFile:fs/fat/inode.c SourceLine:140] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__fat_get_block SourceFile:fs/fat/inode.c SourceLine:146] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__fat_get_block SourceFile:fs/fat/inode.c SourceLine:163] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__isofs_iget SourceFile:fs/isofs/inode.c SourceLine:1509] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__isofs_iget 
SourceFile:fs/isofs/inode.c SourceLine:1515] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:adfs_dir_checkbyte SourceFile:fs/adfs/dir_f.c SourceLine:73] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:adfs_dir_copyfrom SourceFile:fs/adfs/dir.c SourceLine:24] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:adfs_dir_copyto SourceFile:fs/adfs/dir.c SourceLine:49] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:adfs_dir_read_buffers SourceFile:fs/adfs/dir.c SourceLine:105] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:adfs_f_read SourceFile:fs/adfs/dir_f.c SourceLine:144] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:adfs_iget SourceFile:fs/adfs/inode.c SourceLine:260] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:adfs_map_statfs SourceFile:fs/adfs/map.c SourceLine:211] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:adfs_probe SourceFile:fs/adfs/super.c SourceLine:276] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:adfs_probe SourceFile:fs/adfs/super.c SourceLine:280] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:afs_fill_super SourceFile:fs/afs/super.c SourceLine:458] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:autofs_fill_super SourceFile:fs/autofs/inode.c SourceLine:312] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:binderfs_fill_super SourceFile:drivers/android/binderfs.c SourceLine:622] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:block_to_path SourceFile:fs/minix/itree_v2.c SourceLine:42] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:block_to_path SourceFile:fs/minix/itree_v2.c SourceLine:45] map[ReferenceKind:reads 
ReferencingEntityKind:function ReferencingEntityName:block_to_path SourceFile:fs/minix/itree_v2.c SourceLine:45] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:block_to_path SourceFile:fs/minix/itree_v2.c SourceLine:45] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:block_to_path SourceFile:fs/minix/itree_v2.c SourceLine:47] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:block_to_path SourceFile:fs/minix/itree_v2.c SourceLine:48] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:block_to_path SourceFile:fs/minix/itree_v2.c SourceLine:50] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:block_to_path SourceFile:fs/minix/itree_v2.c SourceLine:50] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:block_to_path SourceFile:fs/minix/itree_v2.c SourceLine:52] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:block_to_path SourceFile:fs/minix/itree_v2.c SourceLine:52] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:block_to_path SourceFile:fs/minix/itree_v2.c SourceLine:53] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:block_to_path SourceFile:fs/minix/itree_v2.c SourceLine:53] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:block_to_path SourceFile:fs/minix/itree_v2.c SourceLine:54] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:bytes_to_block SourceFile:fs/ntfs3/ntfs_fs.h SourceLine:1119] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:cifs_read_super SourceFile:fs/smb/client/cifsfs.c SourceLine:296] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:configfs_fill_super SourceFile:fs/configfs/mount.c SourceLine:69] map[ReferenceKind:reads ReferencingEntityKind:function 
ReferencingEntityName:copy_from_dinode SourceFile:fs/jfs/jfs_imap.c SourceLine:3088] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:copy_to_dinode SourceFile:fs/jfs/jfs_imap.c SourceLine:3137] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:dataLog SourceFile:fs/jfs/jfs_txnmgr.c SourceLine:1538] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:devpts_fill_super SourceFile:fs/devpts/inode.c SourceLine:367] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:diLog SourceFile:fs/jfs/jfs_txnmgr.c SourceLine:1452] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:dlmfs_fill_super SourceFile:fs/ocfs2/dlmfs/dlmfs.c SourceLine:512] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:do_isofs_readdir SourceFile:fs/isofs/dir.c SourceLine:88] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:do_perf_trace_erofs_fill_inode SourceFile:include/trace/events/erofs.h SourceLine:60] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:do_trace_event_raw_event_erofs_fill_inode SourceFile:include/trace/events/erofs.h SourceLine:60] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:dtLog SourceFile:fs/jfs/jfs_txnmgr.c SourceLine:1589] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:dtLog SourceFile:fs/jfs/jfs_txnmgr.c SourceLine:1618] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ea_get SourceFile:fs/jfs/xattr.c SourceLine:476] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ea_get SourceFile:fs/jfs/xattr.c SourceLine:506] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ea_get SourceFile:fs/jfs/xattr.c SourceLine:528] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ea_get SourceFile:fs/jfs/xattr.c 
SourceLine:549] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ea_read SourceFile:fs/jfs/xattr.c SourceLine:393] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ea_read SourceFile:fs/jfs/xattr.c SourceLine:394] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ea_write SourceFile:fs/jfs/xattr.c SourceLine:225] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ea_write SourceFile:fs/jfs/xattr.c SourceLine:253] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ea_write SourceFile:fs/jfs/xattr.c SourceLine:254] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:erofs_fc_fill_super SourceFile:fs/erofs/super.c SourceLine:677] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:erofs_fc_fill_super SourceFile:fs/erofs/super.c SourceLine:701] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:erofs_fc_fill_super SourceFile:fs/erofs/super.c SourceLine:709] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:erofs_find_target_block SourceFile:fs/erofs/namei.c SourceLine:103] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:erofs_iloc SourceFile:fs/erofs/internal.h SourceLine:340] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:erofs_map_blocks SourceFile:fs/erofs/data.c SourceLine:110] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:erofs_map_blocks SourceFile:fs/erofs/data.c SourceLine:114] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:erofs_map_blocks SourceFile:fs/erofs/data.c SourceLine:150] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:erofs_map_blocks SourceFile:fs/erofs/data.c SourceLine:156] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:erofs_map_dev 
SourceFile:fs/erofs/data.c SourceLine:203] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:erofs_map_dev SourceFile:fs/erofs/data.c SourceLine:215] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:erofs_map_dev SourceFile:fs/erofs/data.c SourceLine:217] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:erofs_read_inode SourceFile:fs/erofs/inode.c SourceLine:39] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:erofs_read_inode SourceFile:fs/erofs/inode.c SourceLine:52] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:erofs_read_inode SourceFile:fs/erofs/inode.c SourceLine:91] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:erofs_read_inode SourceFile:fs/erofs/inode.c SourceLine:195] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:erofs_read_inode SourceFile:fs/erofs/inode.c SourceLine:207] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:erofs_readdir SourceFile:fs/erofs/dir.c SourceLine:84] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:erofs_xattr_iter_shared SourceFile:fs/erofs/xattr.c SourceLine:313] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:exfat_allocate_bitmap SourceFile:fs/exfat/balloc.c SourceLine:98] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:exfat_allocate_bitmap SourceFile:fs/exfat/balloc.c SourceLine:105] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:exfat_clear_bitmap SourceFile:fs/exfat/balloc.c SourceLine:213] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:exfat_create_upcase_table SourceFile:fs/exfat/nls.c SourceLine:744] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:exfat_dir_readahead SourceFile:fs/exfat/dir.c SourceLine:631] 
map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:exfat_dir_readahead SourceFile:fs/exfat/dir.c SourceLine:632] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:exfat_ent_set SourceFile:fs/exfat/fatent.c SourceLine:76] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:exfat_find_free_bitmap SourceFile:fs/exfat/balloc.c SourceLine:265] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:exfat_find_location SourceFile:fs/exfat/dir.c SourceLine:621] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:exfat_get_block SourceFile:fs/exfat/inode.c SourceLine:264] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:exfat_get_block SourceFile:fs/exfat/inode.c SourceLine:306] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:exfat_get_block SourceFile:fs/exfat/inode.c SourceLine:308] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:exfat_get_block SourceFile:fs/exfat/inode.c SourceLine:323] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:exfat_get_block SourceFile:fs/exfat/inode.c SourceLine:353] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:exfat_get_block SourceFile:fs/exfat/inode.c SourceLine:384] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:exfat_get_dentry_cached SourceFile:fs/exfat/dir.c SourceLine:734] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:exfat_iterate SourceFile:fs/exfat/dir.c SourceLine:251] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:exfat_set_bitmap SourceFile:fs/exfat/balloc.c SourceLine:195] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:exfat_test_bitmap SourceFile:fs/exfat/balloc.c SourceLine:239] map[ReferenceKind:reads ReferencingEntityKind:function 
ReferencingEntityName:exfat_test_bitmap_range SourceFile:fs/exfat/balloc.c SourceLine:46] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:exfat_validate_empty_dentry_set SourceFile:fs/exfat/dir.c SourceLine:894] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:exfat_validate_empty_dentry_set SourceFile:fs/exfat/dir.c SourceLine:903] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:exfat_zeroed_cluster SourceFile:fs/exfat/fatent.c SourceLine:361] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:exfat_zeroed_cluster SourceFile:fs/exfat/fatent.c SourceLine:362] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_add_entry SourceFile:fs/ext4/namei.c SourceLine:2413] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_append SourceFile:fs/ext4/namei.c SourceLine:66] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_block_group_meta_init SourceFile:fs/ext4/super.c SourceLine:5248] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_block_group_meta_init SourceFile:fs/ext4/super.c SourceLine:5250] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_blocks_for_truncate SourceFile:fs/ext4/truncate.h SourceLine:34] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_calculate_overhead SourceFile:fs/ext4/super.c SourceLine:4245] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_check_geometry SourceFile:fs/ext4/super.c SourceLine:4798] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_empty_dir SourceFile:fs/ext4/namei.c SourceLine:3106] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_ext_convert_to_initialized SourceFile:fs/ext4/extents.c SourceLine:3551] map[ReferenceKind:reads 
ReferencingEntityKind:function ReferencingEntityName:ext4_ext_convert_to_initialized SourceFile:fs/ext4/extents.c SourceLine:3697] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_ext_replay_set_iblocks SourceFile:fs/ext4/extents.c SourceLine:6181] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_ext_truncate SourceFile:fs/ext4/extents.c SourceLine:4533] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_fc_replay_add_range SourceFile:fs/ext4/fast_commit.c SourceLine:1871] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_fc_replay_del_range SourceFile:fs/ext4/fast_commit.c SourceLine:1934] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_file_getattr SourceFile:fs/ext4/inode.c SourceLine:6108] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_fill_es_cache_info SourceFile:fs/ext4/extents.c SourceLine:2212] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_fsmap_from_internal SourceFile:fs/ext4/fsmap.c SourceLine:21] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_fsmap_from_internal SourceFile:fs/ext4/fsmap.c SourceLine:24] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_fsmap_to_internal SourceFile:fs/ext4/fsmap.c SourceLine:36] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_fsmap_to_internal SourceFile:fs/ext4/fsmap.c SourceLine:38] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_get_es_cache SourceFile:fs/ext4/extents.c SourceLine:5226] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_get_es_cache SourceFile:fs/ext4/extents.c SourceLine:5227] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_get_group_number SourceFile:fs/ext4/balloc.c 
SourceLine:44] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_group_extend SourceFile:fs/ext4/resize.c SourceLine:1853] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_ind_remove_space SourceFile:fs/ext4/indirect.c SourceLine:1243] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_ind_truncate SourceFile:fs/ext4/indirect.c SourceLine:1126] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_ind_truncate SourceFile:fs/ext4/indirect.c SourceLine:1128] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_init_orphan_info SourceFile:fs/ext4/orphan.c SourceLine:599] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_inline_data_iomap SourceFile:fs/ext4/inline.c SourceLine:1818] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_iomap_xattr_fiemap SourceFile:fs/ext4/extents.c SourceLine:5086] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_add_groupinfo SourceFile:fs/ext4/mballoc.c SourceLine:3429] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_avg_fragment_size_destroy SourceFile:fs/ext4/mballoc.c SourceLine:3693] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_good_group SourceFile:fs/ext4/mballoc.c SourceLine:2735] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_group_or_file SourceFile:fs/ext4/mballoc.c SourceLine:5839] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_init SourceFile:fs/ext4/mballoc.c SourceLine:3720] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_init SourceFile:fs/ext4/mballoc.c SourceLine:3728] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_init SourceFile:fs/ext4/mballoc.c 
SourceLine:3745] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_init SourceFile:fs/ext4/mballoc.c SourceLine:3754] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_init SourceFile:fs/ext4/mballoc.c SourceLine:3757] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_init SourceFile:fs/ext4/mballoc.c SourceLine:3762] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_init SourceFile:fs/ext4/mballoc.c SourceLine:3766] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_init SourceFile:fs/ext4/mballoc.c SourceLine:3771] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_init_backend SourceFile:fs/ext4/mballoc.c SourceLine:3545] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_init_backend SourceFile:fs/ext4/mballoc.c SourceLine:3568] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_init_cache SourceFile:fs/ext4/mballoc.c SourceLine:1488] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_largest_free_orders_destroy SourceFile:fs/ext4/mballoc.c SourceLine:3705] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_load_buddy_gfp SourceFile:fs/ext4/mballoc.c SourceLine:1691] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_mark_free_simple SourceFile:fs/ext4/mballoc.c SourceLine:819] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_normalize_request SourceFile:fs/ext4/mballoc.c SourceLine:4547] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_regular_allocator SourceFile:fs/ext4/mballoc.c SourceLine:3027] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_regular_allocator 
SourceFile:fs/ext4/mballoc.c SourceLine:3029] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_regular_allocator SourceFile:fs/ext4/mballoc.c SourceLine:3029] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_release SourceFile:fs/ext4/mballoc.c SourceLine:3889] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_scan_groups_best_avail SourceFile:fs/ext4/mballoc.c SourceLine:1077] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_scan_groups_best_avail SourceFile:fs/ext4/mballoc.c SourceLine:1078] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_scan_groups_goal_fast SourceFile:fs/ext4/mballoc.c SourceLine:1022] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_scan_groups_p2_aligned SourceFile:fs/ext4/mballoc.c SourceLine:971] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_seq_groups_show SourceFile:fs/ext4/mballoc.c SourceLine:3145] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_seq_structs_summary_next SourceFile:fs/ext4/mballoc.c SourceLine:3321] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_seq_structs_summary_show SourceFile:fs/ext4/mballoc.c SourceLine:3337] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_seq_structs_summary_show SourceFile:fs/ext4/mballoc.c SourceLine:3338] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_seq_structs_summary_start SourceFile:fs/ext4/mballoc.c SourceLine:3309] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_simple_scan_group SourceFile:fs/ext4/mballoc.c SourceLine:2508] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_protect_reserved_inode 
SourceFile:fs/ext4/block_validity.c SourceLine:163] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_quota_read SourceFile:fs/ext4/super.c SourceLine:7305] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_quota_write SourceFile:fs/ext4/super.c SourceLine:7341] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_readdir SourceFile:fs/ext4/dir.c SourceLine:184] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_set_max_mapping_order SourceFile:fs/ext4/super.c SourceLine:5069] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_setattr SourceFile:fs/ext4/inode.c SourceLine:5908] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_setattr SourceFile:fs/ext4/inode.c SourceLine:5914] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_setattr SourceFile:fs/ext4/inode.c SourceLine:5916] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_split_convert_extents SourceFile:fs/ext4/extents.c SourceLine:3808] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_split_extent SourceFile:fs/ext4/extents.c SourceLine:3466] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_statfs_project SourceFile:fs/ext4/super.c SourceLine:6885] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_statfs_project SourceFile:fs/ext4/super.c SourceLine:6891] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_trim_fs SourceFile:fs/ext4/mballoc.c SourceLine:7055] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_trim_fs SourceFile:fs/ext4/mballoc.c SourceLine:7056] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_trim_fs SourceFile:fs/ext4/mballoc.c SourceLine:7057] 
map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_trim_fs SourceFile:fs/ext4/mballoc.c SourceLine:7066] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_trim_fs SourceFile:fs/ext4/mballoc.c SourceLine:7129] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_zero_partial_blocks SourceFile:fs/ext4/inode.c SourceLine:4165] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_zero_partial_blocks SourceFile:fs/ext4/inode.c SourceLine:4166] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:extent_trunc SourceFile:fs/udf/truncate.c SourceLine:31] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:extent_trunc SourceFile:fs/udf/truncate.c SourceLine:33] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:f2fs_inline_data_fiemap SourceFile:fs/f2fs/inline.c SourceLine:828] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:f2fs_statfs_project SourceFile:fs/f2fs/super.c SourceLine:2178] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:f2fs_statfs_project SourceFile:fs/f2fs/super.c SourceLine:2184] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fat12_ent_blocknr SourceFile:fs/fat/fatent.c SourceLine:30] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fat__get_entry SourceFile:fs/fat/dir.c SourceLine:95] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fat__get_entry SourceFile:fs/fat/dir.c SourceLine:107] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fat_bmap SourceFile:fs/fat/cache.c SourceLine:374] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fat_ent_blocknr SourceFile:fs/fat/fatent.c SourceLine:40] map[ReferenceKind:reads ReferencingEntityKind:function 
ReferencingEntityName:fat_get_block SourceFile:fs/fat/inode.c SourceLine:193] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fat_get_block_bmap SourceFile:fs/fat/inode.c SourceLine:311] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fat_get_mapped_cluster SourceFile:fs/fat/cache.c SourceLine:313] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fat_ra_init SourceFile:fs/fat/fatent.c SourceLine:667] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:ffs_sb_fill SourceFile:drivers/usb/gadget/function/f_fs.c SourceLine:1929] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fuse_change_attributes_common SourceFile:fs/fuse/inode.c SourceLine:294] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:fuse_fill_super_common SourceFile:fs/fuse/inode.c SourceLine:1848] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:fuse_fill_super_submount SourceFile:fs/fuse/inode.c SourceLine:1737] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fuse_fill_super_submount SourceFile:fs/fuse/inode.c SourceLine:1737] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fuse_fillattr SourceFile:fs/fuse/dir.c SourceLine:1388] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:gadgetfs_fill_super SourceFile:drivers/usb/gadget/legacy/inode.c SourceLine:2036] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_fill_super SourceFile:fs/gfs2/ops_fstype.c SourceLine:1155] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_read_super SourceFile:fs/gfs2/ops_fstype.c SourceLine:228] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:hfs_bmap_reserve SourceFile:fs/hfs/btree.c SourceLine:276] map[ReferenceKind:reads ReferencingEntityKind:function 
ReferencingEntityName:hfs_file_truncate SourceFile:fs/hfs/extent.c SourceLine:549] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:hfs_file_truncate SourceFile:fs/hfs/extent.c SourceLine:550] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:hfs_inode_read_fork SourceFile:fs/hfs/inode.c SourceLine:312] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:hfs_inode_read_fork SourceFile:fs/hfs/inode.c SourceLine:313] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:hfs_mdb_commit SourceFile:fs/hfs/mdb.c SourceLine:340] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:hfs_mdb_get SourceFile:fs/hfs/mdb.c SourceLine:114] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:hfs_mdb_get SourceFile:fs/hfs/mdb.c SourceLine:153] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:hfs_mdb_get SourceFile:fs/hfs/mdb.c SourceLine:167] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:hfs_mdb_get SourceFile:fs/hfs/mdb.c SourceLine:173] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:hfs_mdb_get SourceFile:fs/hfs/mdb.c SourceLine:190] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:hfs_mdb_get SourceFile:fs/hfs/mdb.c SourceLine:214] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:hfs_part_find SourceFile:fs/hfs/part_tbl.c SourceLine:66] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:hfs_part_find SourceFile:fs/hfs/part_tbl.c SourceLine:105] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:hfsplus_file_truncate SourceFile:fs/hfsplus/extents.c SourceLine:624] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:hfsplus_file_truncate SourceFile:fs/hfsplus/extents.c SourceLine:625] map[ReferenceKind:reads 
ReferencingEntityKind:function ReferencingEntityName:hfsplus_inode_read_fork SourceFile:fs/hfsplus/inode.c SourceLine:530] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:hfsplus_inode_read_fork SourceFile:fs/hfsplus/inode.c SourceLine:531] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:hfsplus_read_wrapper SourceFile:fs/hfsplus/wrapper.c SourceLine:228] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:hfsplus_read_wrapper SourceFile:fs/hfsplus/wrapper.c SourceLine:231] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:hugetlbfs_fill_super SourceFile:fs/hugetlbfs/inode.c SourceLine:1437] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:init_mount_fs_info SourceFile:fs/btrfs/disk-io.c SourceLine:2906] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:init_nilfs SourceFile:fs/nilfs2/the_nilfs.c SourceLine:762] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:init_nilfs SourceFile:fs/nilfs2/the_nilfs.c SourceLine:769] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:inode_bmap SourceFile:fs/udf/inode.c SourceLine:2433] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:inode_getblk SourceFile:fs/udf/inode.c SourceLine:746] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:inode_getblk SourceFile:fs/udf/inode.c SourceLine:787] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:inode_getblk SourceFile:fs/udf/inode.c SourceLine:793] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:inode_init_always_gfp SourceFile:fs/inode.c SourceLine:235] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:is_dx_dir SourceFile:fs/ext4/dir.c SourceLine:52] map[ReferenceKind:reads ReferencingEntityKind:function 
ReferencingEntityName:is_exceed_eof SourceFile:fs/fat/cache.c SourceLine:333] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:isofs_fill_super SourceFile:fs/isofs/inode.c SourceLine:631] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:isofs_find_entry SourceFile:fs/isofs/namei.c SourceLine:41] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:isofs_get_blocks SourceFile:fs/isofs/inode.c SourceLine:1041] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:isofs_get_blocks SourceFile:fs/isofs/inode.c SourceLine:1054] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:isofs_get_blocks SourceFile:fs/isofs/inode.c SourceLine:1075] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:isofs_read_inode SourceFile:fs/isofs/inode.c SourceLine:1316] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:isofs_statfs SourceFile:fs/isofs/inode.c SourceLine:1006] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jbd2_journal_blocks_per_folio SourceFile:fs/jbd2/journal.c SourceLine:2678] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jbd2_journal_init_inode SourceFile:fs/jbd2/journal.c SourceLine:1680] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jbd2_journal_init_inode SourceFile:fs/jbd2/journal.c SourceLine:1685] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:jffs2_do_fill_super SourceFile:fs/jffs2/fs.c SourceLine:594] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_extendfs SourceFile:fs/jfs/resize.c SourceLine:255] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_extendfs SourceFile:fs/jfs/resize.c SourceLine:508] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_get_block 
SourceFile:fs/jfs/inode.c SourceLine:223] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_get_block SourceFile:fs/jfs/inode.c SourceLine:252] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_ioc_trim SourceFile:fs/jfs/jfs_discard.c SourceLine:80] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_ioc_trim SourceFile:fs/jfs/jfs_discard.c SourceLine:81] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_ioc_trim SourceFile:fs/jfs/jfs_discard.c SourceLine:82] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_ioc_trim SourceFile:fs/jfs/jfs_discard.c SourceLine:111] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_quota_read SourceFile:fs/jfs/super.c SourceLine:698] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_quota_write SourceFile:fs/jfs/super.c SourceLine:742] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:kernfs_fill_super SourceFile:fs/kernfs/mount.c SourceLine:294] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mb_avg_fragment_size_order SourceFile:fs/ext4/mballoc.c SourceLine:854] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mb_avg_fragment_size_order SourceFile:fs/ext4/mballoc.c SourceLine:856] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mb_avg_fragment_size_order SourceFile:fs/ext4/mballoc.c SourceLine:857] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mb_regenerate_buddy SourceFile:fs/ext4/mballoc.c SourceLine:1344] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mb_set_largest_free_order SourceFile:fs/ext4/mballoc.c SourceLine:1248] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:minix_free_block SourceFile:fs/minix/bitmap.c 
SourceLine:47] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:minix_free_inode SourceFile:fs/minix/bitmap.c SourceLine:188] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:mqueue_fill_super SourceFile:ipc/mqueue.c SourceLine:410] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nblocks SourceFile:fs/minix/itree_common.c SourceLine:359] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:nfs_fill_super SourceFile:fs/nfs/super.c SourceLine:1072] map[ReferenceKind:takes-address-of ReferencingEntityKind:function ReferencingEntityName:nfs_fill_super SourceFile:fs/nfs/super.c SourceLine:1078] map[ReferenceKind:takes-address-of ReferencingEntityKind:function ReferencingEntityName:nfs_fill_super SourceFile:fs/nfs/super.c SourceLine:1115] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nfs_statfs SourceFile:fs/nfs/super.c SourceLine:288] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_truncate SourceFile:fs/nilfs2/inode.c SourceLine:810] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ntfs_bread SourceFile:fs/ntfs3/fsntfs.c SourceLine:1043] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ntfs_bread SourceFile:fs/ntfs3/fsntfs.c SourceLine:1052] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ntfs_bread_run SourceFile:fs/ntfs3/fsntfs.c SourceLine:1181] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ntfs_get_bh SourceFile:fs/ntfs3/fsntfs.c SourceLine:1362] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ntfs_init_from_boot SourceFile:fs/ntfs3/super.c SourceLine:1187] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ntfs_init_from_boot SourceFile:fs/ntfs3/super.c SourceLine:1188] map[ReferenceKind:reads 
ReferencingEntityKind:function ReferencingEntityName:ntfs_read_run_nb_ra SourceFile:fs/ntfs3/fsntfs.c SourceLine:1235] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ntfs_sb_write SourceFile:fs/ntfs3/fsntfs.c SourceLine:1062] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ntfs_trim_fs SourceFile:fs/ntfs3/bitmap.c SourceLine:1429] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ntfs_unmap_meta SourceFile:fs/ntfs3/super.c SourceLine:1728] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ntfs_update_mftmirr SourceFile:fs/ntfs3/fsntfs.c SourceLine:884] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ntfs_update_mftmirr SourceFile:fs/ntfs3/fsntfs.c SourceLine:885] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:nullfs_fs_fill_super SourceFile:fs/nullfs.c SourceLine:17] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_acquire_dquot SourceFile:fs/ocfs2/quota_global.c SourceLine:854] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_align_bytes_to_blocks SourceFile:fs/ocfs2/ocfs2.h SourceLine:865] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_backup_super_blkno SourceFile:fs/ocfs2/ocfs2_fs.h SourceLine:1400] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_block_to_cluster_start SourceFile:fs/ocfs2/ocfs2.h SourceLine:842] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_blocks_for_bytes SourceFile:fs/ocfs2/ocfs2.h SourceLine:830] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_blocks_per_xattr_bucket SourceFile:fs/ocfs2/xattr.c SourceLine:313] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_blocks_to_clusters SourceFile:fs/ocfs2/ocfs2.h SourceLine:798] 
map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_bucket_align_free_start SourceFile:fs/ocfs2/xattr.c SourceLine:1786] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_bucket_align_free_start SourceFile:fs/ocfs2/xattr.c SourceLine:1787] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_clusters_for_blocks SourceFile:fs/ocfs2/ocfs2.h SourceLine:788] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_clusters_to_blocks SourceFile:fs/ocfs2/ocfs2.h SourceLine:779] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_create_symlink_data SourceFile:fs/ocfs2/namei.c SourceLine:1726] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_create_symlink_data SourceFile:fs/ocfs2/namei.c SourceLine:1756] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_dio_wr_get_block SourceFile:fs/ocfs2/aops.c SourceLine:2125] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_dir_foreach_blk_el SourceFile:fs/ocfs2/dir.c SourceLine:1868] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_dir_foreach_blk_el SourceFile:fs/ocfs2/dir.c SourceLine:1882] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_expand_inline_dir SourceFile:fs/ocfs2/dir.c SourceLine:2827] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_fiemap SourceFile:fs/ocfs2/extent_map.c SourceLine:809] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_fiemap_inline SourceFile:fs/ocfs2/extent_map.c SourceLine:731] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_find_dir_space_el SourceFile:fs/ocfs2/dir.c SourceLine:3501] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_find_entry_el 
SourceFile:fs/ocfs2/dir.c SourceLine:700] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_find_entry_el SourceFile:fs/ocfs2/dir.c SourceLine:745] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_find_entry_el SourceFile:fs/ocfs2/dir.c SourceLine:767] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_find_victim_alloc_group SourceFile:fs/ocfs2/move_extents.c SourceLine:404] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_global_read_info SourceFile:fs/ocfs2/quota_global.c SourceLine:403] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_initialize_super SourceFile:fs/ocfs2/super.c SourceLine:2157] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_quota_read SourceFile:fs/ocfs2/quota_global.c SourceLine:174] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_quota_write SourceFile:fs/ocfs2/quota_global.c SourceLine:223] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_read_quota_block SourceFile:fs/ocfs2/quota_local.c SourceLine:140] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_read_virt_blocks SourceFile:fs/ocfs2/extent_map.c SourceLine:987] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_read_virt_blocks SourceFile:fs/ocfs2/extent_map.c SourceLine:1011] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_statfs SourceFile:fs/ocfs2/super.c SourceLine:1593] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_statfs SourceFile:fs/ocfs2/super.c SourceLine:1595] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_symlink_get_block SourceFile:fs/ocfs2/aops.c SourceLine:56] map[ReferenceKind:reads ReferencingEntityKind:function 
ReferencingEntityName:ocfs2_validate_and_adjust_move_goal SourceFile:fs/ocfs2/move_extents.c SourceLine:480] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_validate_inode_block SourceFile:fs/ocfs2/inode.c SourceLine:1521] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_write_cluster SourceFile:fs/ocfs2/aops.c SourceLine:1163] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_write_cluster SourceFile:fs/ocfs2/aops.c SourceLine:1170] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_xa_bucket_fill_value_buf SourceFile:fs/ocfs2/xattr.c SourceLine:1901] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_xa_bucket_fill_value_buf SourceFile:fs/ocfs2/xattr.c SourceLine:1904] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_xa_bucket_offset_pointer SourceFile:fs/ocfs2/xattr.c SourceLine:1760] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_xattr_bucket_get_name_value SourceFile:fs/ocfs2/xattr.c SourceLine:4072] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_zero_dinode_id2_with_xattr SourceFile:fs/ocfs2/alloc.c SourceLine:7038] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_zero_range_for_truncate SourceFile:fs/ocfs2/alloc.c SourceLine:6995] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ol_dqblk_block_offset SourceFile:fs/ocfs2/quota_local.c SourceLine:79] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ol_dqblk_chunk_off SourceFile:fs/ocfs2/quota_local.c SourceLine:87] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ol_dqblk_chunk_off SourceFile:fs/ocfs2/quota_local.c SourceLine:89] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ol_dqblk_off 
SourceFile:fs/ocfs2/quota_local.c SourceLine:73] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:open_ctree SourceFile:fs/btrfs/disk-io.c SourceLine:3456] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:open_inline_log SourceFile:fs/jfs/jfs_logmgr.c SourceLine:1168] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:open_inline_log SourceFile:fs/jfs/jfs_logmgr.c SourceLine:1169] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:open_inline_log SourceFile:fs/jfs/jfs_logmgr.c SourceLine:1170] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:orangefs_fill_sb SourceFile:fs/orangefs/super.c SourceLine:422] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:proc_fill_super SourceFile:fs/proc/root.c SourceLine:263] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:pseudo_fs_fill_super SourceFile:fs/libfs.c SourceLine:684] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:pstore_fill_super SourceFile:fs/pstore/inode.c SourceLine:418] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:qnx6_fill_super SourceFile:fs/qnx6/inode.c SourceLine:360] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:qnx6_fill_super SourceFile:fs/qnx6/inode.c SourceLine:367] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:qnx6_fill_super SourceFile:fs/qnx6/inode.c SourceLine:368] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:qnx6_fill_super SourceFile:fs/qnx6/inode.c SourceLine:371] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:qnx6_fill_super SourceFile:fs/qnx6/inode.c SourceLine:372] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:qnx6_longname SourceFile:fs/qnx6/dir.c SourceLine:54] map[ReferenceKind:reads 
ReferencingEntityKind:function ReferencingEntityName:qnx6_longname SourceFile:fs/qnx6/dir.c SourceLine:61] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:ramfs_fill_super SourceFile:fs/ramfs/inode.c SourceLine:267] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:readSuper SourceFile:fs/jfs/jfs_mount.c SourceLine:461] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:readSuper SourceFile:fs/jfs/jfs_mount.c SourceLine:466] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:read_block_bitmap SourceFile:fs/udf/balloc.c SourceLine:59] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:romfs_fill_super SourceFile:fs/romfs/super.c SourceLine:467] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:rpc_fill_super SourceFile:net/sunrpc/rpc_pipe.c SourceLine:1126] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:sb_bdev_nr_blocks SourceFile:include/linux/blkdev.h SourceLine:853] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:sb_issue_discard SourceFile:include/linux/blkdev.h SourceLine:1281] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:sb_issue_discard SourceFile:include/linux/blkdev.h SourceLine:1283] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:sb_issue_zeroout SourceFile:include/linux/blkdev.h SourceLine:1291] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:sb_issue_zeroout SourceFile:include/linux/blkdev.h SourceLine:1293] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:sb_set_blocksize SourceFile:block/bdev.c SourceLine:244] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:scan_nres_bitmap SourceFile:fs/ntfs3/index.c SourceLine:374] map[ReferenceKind:reads ReferencingEntityKind:function 
ReferencingEntityName:scan_nres_bitmap SourceFile:fs/ntfs3/index.c SourceLine:375] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:scan_nres_bitmap SourceFile:fs/ntfs3/index.c SourceLine:420] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:shmem_fill_super SourceFile:mm/shmem.c SourceLine:5089] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:simple_fill_super SourceFile:fs/libfs.c SourceLine:1045] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:skip_hole SourceFile:fs/ext4/extents.c SourceLine:6090] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:truncate SourceFile:fs/minix/itree_common.c SourceLine:312] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:ubifs_fill_super SourceFile:fs/ubifs/super.c SourceLine:2177] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_bitmap_free_blocks SourceFile:fs/udf/balloc.c SourceLine:140] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_bitmap_new_block SourceFile:fs/udf/balloc.c SourceLine:244] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_bitmap_new_block SourceFile:fs/udf/balloc.c SourceLine:328] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_bitmap_prealloc_blocks SourceFile:fs/udf/balloc.c SourceLine:196] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_check_vsd SourceFile:fs/udf/super.c SourceLine:754] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_check_vsd SourceFile:fs/udf/super.c SourceLine:757] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_check_vsd SourceFile:fs/udf/super.c SourceLine:772] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_count_free_table SourceFile:fs/udf/super.c 
SourceLine:2500] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_extend_file SourceFile:fs/udf/inode.c SourceLine:656] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_free_blocks SourceFile:fs/udf/balloc.c SourceLine:683] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_load_sequence SourceFile:fs/udf/super.c SourceLine:1871] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_load_sequence SourceFile:fs/udf/super.c SourceLine:1877] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_merge_extents SourceFile:fs/udf/inode.c SourceLine:1118] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_prealloc_blocks SourceFile:fs/udf/balloc.c SourceLine:709] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_prealloc_extents SourceFile:fs/udf/inode.c SourceLine:1036] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_prealloc_extents SourceFile:fs/udf/inode.c SourceLine:1050] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_prealloc_extents SourceFile:fs/udf/inode.c SourceLine:1059] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_prealloc_extents SourceFile:fs/udf/inode.c SourceLine:1069] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_prealloc_extents SourceFile:fs/udf/inode.c SourceLine:1081] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_prealloc_extents SourceFile:fs/udf/inode.c SourceLine:1089] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_prealloc_extents SourceFile:fs/udf/inode.c SourceLine:1094] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_prealloc_extents SourceFile:fs/udf/inode.c SourceLine:1108] map[ReferenceKind:reads 
ReferencingEntityKind:function ReferencingEntityName:udf_process_sequence SourceFile:fs/udf/super.c SourceLine:1791] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_read_inode SourceFile:fs/udf/inode.c SourceLine:1534] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_read_inode SourceFile:fs/udf/inode.c SourceLine:1551] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_split_extents SourceFile:fs/udf/inode.c SourceLine:958] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_table_free_blocks SourceFile:fs/udf/balloc.c SourceLine:394] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_table_free_blocks SourceFile:fs/udf/balloc.c SourceLine:396] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_table_free_blocks SourceFile:fs/udf/balloc.c SourceLine:398] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_table_free_blocks SourceFile:fs/udf/balloc.c SourceLine:406] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_table_free_blocks SourceFile:fs/udf/balloc.c SourceLine:413] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_table_free_blocks SourceFile:fs/udf/balloc.c SourceLine:415] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_table_free_blocks SourceFile:fs/udf/balloc.c SourceLine:425] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_table_free_blocks SourceFile:fs/udf/balloc.c SourceLine:461] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_table_new_block SourceFile:fs/udf/balloc.c SourceLine:600] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_table_new_block SourceFile:fs/udf/balloc.c SourceLine:604] map[ReferenceKind:reads ReferencingEntityKind:function 
ReferencingEntityName:udf_table_prealloc_blocks SourceFile:fs/udf/balloc.c SourceLine:535] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_table_prealloc_blocks SourceFile:fs/udf/balloc.c SourceLine:539] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_truncate_extents SourceFile:fs/udf/truncate.c SourceLine:204] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_truncate_extents SourceFile:fs/udf/truncate.c SourceLine:220] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_truncate_extents SourceFile:fs/udf/truncate.c SourceLine:267] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_update_inode SourceFile:fs/udf/inode.c SourceLine:1739] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ufs_read_cylinder SourceFile:fs/ufs/cylinder.c SourceLine:44] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:v9fs_fill_super SourceFile:fs/9p/vfs_super.c SourceLine:40] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:v9fs_fill_super SourceFile:fs/9p/vfs_super.c SourceLine:41] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:wnd_extend SourceFile:fs/ntfs3/bitmap.c SourceLine:1360] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:wnd_extend SourceFile:fs/ntfs3/bitmap.c SourceLine:1376] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:wnd_find SourceFile:fs/ntfs3/bitmap.c SourceLine:1131] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:wnd_is_free_hlp SourceFile:fs/ntfs3/bitmap.c SourceLine:853] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:wnd_is_used SourceFile:fs/ntfs3/bitmap.c SourceLine:923] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:wnd_map 
SourceFile:fs/ntfs3/bitmap.c SourceLine:707] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:wnd_map SourceFile:fs/ntfs3/bitmap.c SourceLine:716] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:wnd_rescan SourceFile:fs/ntfs3/bitmap.c SourceLine:571] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:wnd_set_free SourceFile:fs/ntfs3/bitmap.c SourceLine:731] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:wnd_set_used SourceFile:fs/ntfs3/bitmap.c SourceLine:772] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:xfs_fs_fill_super SourceFile:fs/xfs/xfs_super.c SourceLine:1865] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:xtLog SourceFile:fs/jfs/jfs_txnmgr.c SourceLine:1708] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:xtLog SourceFile:fs/jfs/jfs_txnmgr.c SourceLine:1798] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:xtLog SourceFile:fs/jfs/jfs_txnmgr.c SourceLine:1918] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:z_erofs_fill_inode SourceFile:fs/erofs/zmap.c SourceLine:658] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:z_erofs_get_extent_compressedlen SourceFile:fs/erofs/zmap.c SourceLine:346] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:z_erofs_load_lz4_config SourceFile:fs/erofs/decompressor.c SourceLine:30] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:z_erofs_map_blocks_fo SourceFile:fs/erofs/zmap.c SourceLine:469] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:zisofs_fill_pages SourceFile:fs/isofs/compress.c SourceLine:217] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:zisofs_uncompress_block SourceFile:fs/isofs/compress.c SourceLine:47] 
map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:zonefs_init_zgroup SourceFile:fs/zonefs/super.c SourceLine:1022] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:zonefs_init_zgroup SourceFile:fs/zonefs/super.c SourceLine:1023] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:zonefs_update_stats SourceFile:fs/zonefs/super.c SourceLine:171] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:zonefs_update_stats SourceFile:fs/zonefs/super.c SourceLine:178]] TruncatedOutput:false] |
| 250/4 | 2026/03/16 16:24 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 80951 output: 46 thoughts: 226 Thoughts: **Analyzing JFS Block Size Usage** Okay, so I'm looking at how `s_blocksize_bits` is used within the JFS filesystem code. The initial scan shows that it's accessed in quite a few functions, including `copy_from_dinode`, `copy_to_dinode`, `dataLog`, `diLog`, `dtLog`, `ea_get`, `ea_read`, `ea_write`, `jfs_extendfs`, `jfs_get_block`, `jfs_ioc_trim`, `jfs_quota_read`, `jfs_quota_write`, `open_inline_log`, `readSuper`, and `xtLog`. That's a pretty wide range, suggesting it's fundamental to many operations. Let's zoom in a bit on `copy_from_dinode`. I remember that one has a line related to block numbers: ```c 3088: ip->i_blocks = LBLK2PBLK(ip->i_sb, le64_to_cpu(dip->di_nblocks)); ``` Aha, it's converting logical block numbers to physical block numbers using `LBLK2PBLK`. So `s_blocksize_bits` is reached through a macro here rather than used directly; I'll need to chase that macro down to understand the core calculation, since `LBLK2PBLK` is the most obvious place where it would ultimately be utilized. |
| 251/4 | 2026/03/16 16:25 | tool | codesearch-definition-source |
Error:requested entity does not exist Args: map[ContextFile:fs/jfs/jfs_filsys.h IncludeLines:true Name:LBLK2PBLK] Results: map[SourceCode: SourceFile:] |
| 252/4 | 2026/03/16 16:25 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 81572 output: 23 thoughts: 15 Thoughts: Okay, here's my summary, presented as a stream of thought: **Defining the Unknown** Right, `LBLK2PBLK`. That term's not immediately familiar. My first instinct, as always, is to establish a foundational understanding, so I'll initiate a search for the definition of `LBLK2PBLK`. The all-caps name suggests a macro or constant, so the search should clarify its meaning and the context in which it's used. Let's see what we uncover. |
| 253/4 | 2026/03/16 16:25 | tool | codesearch-find-references |
Error:requested entity does not exist Args: map[Name:LBLK2PBLK] Results: map[References:<nil> TruncatedOutput:false] |
| 254/4 | 2026/03/16 16:25 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 81786 output: 27 thoughts: 38 Thoughts: Okay, here's my interpretation of those thoughts, summarized in the first person, assuming the user is an expert familiar with filesystems: **Locating That Definition** Alright, so I need to find where this thing is defined. I'm thinking it's probably buried deep in the JFS code, likely as a macro. I'd start digging around in the `fs/jfs` directory, specifically focusing on the header files. My gut tells me it's either going to be in `jfs_types.h`, since that's usually where core data structures and types live, or possibly in `jfs_filsys.h`, since it might be related to the filesystem layout itself. Either way, I'm expecting a macro; that's the most likely scenario given the context. Time to pull up those files and start grepping, I guess. |
| 255/4 | 2026/03/16 16:25 | tool | read-file |
0mArgs:map[File:fs/jfs/jfs_filsys.h] Results: map[Contents:/* SPDX-License-Identifier: GPL-2.0-or-later */ /* * Copyright (C) International Business Machines Corp., 2000-2003 */ #ifndef _H_JFS_FILSYS #define _H_JFS_FILSYS /* * jfs_filsys.h * * file system (implementation-dependent) constants * * refer to <limits.h> for system wide implementation-dependent constants */ /* * file system option (superblock flag) */ /* directory option */ #define JFS_UNICODE 0x00000001 /* unicode name */ /* mount time flags for error handling */ #define JFS_ERR_REMOUNT_RO 0x00000002 /* remount read-only */ #define JFS_ERR_CONTINUE 0x00000004 /* continue */ #define JFS_ERR_PANIC 0x00000008 /* panic */ #define JFS_ERR_MASK (JFS_ERR_REMOUNT_RO|JFS_ERR_CONTINUE|JFS_ERR_PANIC) /* Quota support */ #define JFS_USRQUOTA 0x00000010 #define JFS_GRPQUOTA 0x00000020 /* mount time flag to disable journaling to disk */ #define JFS_NOINTEGRITY 0x00000040 /* mount time flag to enable TRIM to ssd disks */ #define JFS_DISCARD 0x00000080 /* commit option */ #define JFS_COMMIT 0x00000f00 /* commit option mask */ #define JFS_GROUPCOMMIT 0x00000100 /* group (of 1) commit */ #define JFS_LAZYCOMMIT 0x00000200 /* lazy commit */ #define JFS_TMPFS 0x00000400 /* temporary file system - * do not log/commit: * Never implemented */ /* log logical volume option */ #define JFS_INLINELOG 0x00000800 /* inline log within file system */ #define JFS_INLINEMOVE 0x00001000 /* inline log being moved */ /* Secondary aggregate inode table */ #define JFS_BAD_SAIT 0x00010000 /* current secondary ait is bad */ /* sparse regular file support */ #define JFS_SPARSE 0x00020000 /* sparse regular file */ /* DASD Limits F226941 */ #define JFS_DASD_ENABLED 0x00040000 /* DASD limits enabled */ #define JFS_DASD_PRIME 0x00080000 /* Prime DASD usage on boot */ /* big endian flag */ #define JFS_SWAP_BYTES 0x00100000 /* running on big endian computer */ /* Directory index */ #define JFS_DIR_INDEX 0x00200000 /* Persistent index for */ /* 
platform options */ #define JFS_LINUX 0x10000000 /* Linux support */ #define JFS_DFS 0x20000000 /* DCE DFS LFS support */ /* Never implemented */ #define JFS_OS2 0x40000000 /* OS/2 support */ /* case-insensitive name/directory support */ #define JFS_AIX 0x80000000 /* AIX support */ /* * buffer cache configuration */ /* page size */ #ifdef PSIZE #undef PSIZE #endif #define PSIZE 4096 /* page size (in byte) */ #define L2PSIZE 12 /* log2(PSIZE) */ #define POFFSET 4095 /* offset within page */ /* buffer page size */ #define BPSIZE PSIZE /* * fs fundamental size * * PSIZE >= file system block size >= PBSIZE >= DISIZE */ #define PBSIZE 512 /* physical block size (in byte) */ #define L2PBSIZE 9 /* log2(PBSIZE) */ #define DISIZE 512 /* on-disk inode size (in byte) */ #define L2DISIZE 9 /* log2(DISIZE) */ #define IDATASIZE 256 /* inode inline data size */ #define IXATTRSIZE 128 /* inode inline extended attribute size */ #define XTPAGE_SIZE 4096 #define log2_PAGESIZE 12 #define IAG_SIZE 4096 #define IAG_EXTENT_SIZE 4096 #define INOSPERIAG 4096 /* number of disk inodes per iag */ #define L2INOSPERIAG 12 /* l2 number of disk inodes per iag */ #define INOSPEREXT 32 /* number of disk inode per extent */ #define L2INOSPEREXT 5 /* l2 number of disk inode per extent */ #define IXSIZE (DISIZE * INOSPEREXT) /* inode extent size */ #define INOSPERPAGE 8 /* number of disk inodes per 4K page */ #define L2INOSPERPAGE 3 /* log2(INOSPERPAGE) */ #define IAGFREELIST_LWM 64 #define INODE_EXTENT_SIZE IXSIZE /* inode extent size */ #define NUM_INODE_PER_EXTENT INOSPEREXT #define NUM_INODE_PER_IAG INOSPERIAG #define MINBLOCKSIZE 512 #define L2MINBLOCKSIZE 9 #define MAXBLOCKSIZE 4096 #define L2MAXBLOCKSIZE 12 #define MAXFILESIZE ((s64)1 << 52) #define JFS_LINK_MAX 0xffffffff /* Minimum number of bytes supported for a JFS partition */ #define MINJFS (0x1000000) #define MINJFSTEXT "16" /* * file system block size -> physical block size */ #define LBOFFSET(x) ((x) & (PBSIZE - 1)) #define LBNUMBER(x) 
((x) >> L2PBSIZE) #define LBLK2PBLK(sb,b) ((b) << (sb->s_blocksize_bits - L2PBSIZE)) #define PBLK2LBLK(sb,b) ((b) >> (sb->s_blocksize_bits - L2PBSIZE)) /* size in byte -> last page number */ #define SIZE2PN(size) ( ((s64)((size) - 1)) >> (L2PSIZE) ) /* size in byte -> last file system block number */ #define SIZE2BN(size, l2bsize) ( ((s64)((size) - 1)) >> (l2bsize) ) /* * fixed physical block address (physical block size = 512 byte) * * NOTE: since we can't guarantee a physical block size of 512 bytes the use of * these macros should be removed and the byte offset macros used instead. */ #define SUPER1_B 64 /* primary superblock */ #define AIMAP_B (SUPER1_B + 8) /* 1st extent of aggregate inode map */ #define AITBL_B (AIMAP_B + 16) /* * 1st extent of aggregate inode table */ #define SUPER2_B (AITBL_B + 32) /* 2ndary superblock pbn */ #define BMAP_B (SUPER2_B + 8) /* block allocation map */ /* * SIZE_OF_SUPER defines the total amount of space reserved on disk for the * superblock. This is not the same as the superblock structure, since all of * this space is not currently being used. 
*/ #define SIZE_OF_SUPER PSIZE /* * SIZE_OF_AG_TABLE defines the amount of space reserved to hold the AG table */ #define SIZE_OF_AG_TABLE PSIZE /* * SIZE_OF_MAP_PAGE defines the amount of disk space reserved for each page of * the inode allocation map (to hold iag) */ #define SIZE_OF_MAP_PAGE PSIZE /* * fixed byte offset address */ #define SUPER1_OFF 0x8000 /* primary superblock */ #define AIMAP_OFF (SUPER1_OFF + SIZE_OF_SUPER) /* * Control page of aggregate inode map * followed by 1st extent of map */ #define AITBL_OFF (AIMAP_OFF + (SIZE_OF_MAP_PAGE << 1)) /* * 1st extent of aggregate inode table */ #define SUPER2_OFF (AITBL_OFF + INODE_EXTENT_SIZE) /* * secondary superblock */ #define BMAP_OFF (SUPER2_OFF + SIZE_OF_SUPER) /* * block allocation map */ /* * The following macro is used to indicate the number of reserved disk blocks at * the front of an aggregate, in terms of physical blocks. This value is * currently defined to be 32K. This turns out to be the same as the primary * superblock's address, since it directly follows the reserved blocks. */ #define AGGR_RSVD_BLOCKS SUPER1_B /* * The following macro is used to indicate the number of reserved bytes at the * front of an aggregate. This value is currently defined to be 32K. This * turns out to be the same as the primary superblock's byte offset, since it * directly follows the reserved blocks. */ #define AGGR_RSVD_BYTES SUPER1_OFF /* * The following macro defines the byte offset for the first inode extent in * the aggregate inode table. This allows us to find the self inode to find the * rest of the table. Currently this value is 44K. 
*/ #define AGGR_INODE_TABLE_START AITBL_OFF /* * fixed reserved inode number */ /* aggregate inode */ #define AGGR_RESERVED_I 0 /* aggregate inode (reserved) */ #define AGGREGATE_I 1 /* aggregate inode map inode */ #define BMAP_I 2 /* aggregate block allocation map inode */ #define LOG_I 3 /* aggregate inline log inode */ #define BADBLOCK_I 4 /* aggregate bad block inode */ #define FILESYSTEM_I 16 /* 1st/only fileset inode in ait: * fileset inode map inode */ /* per fileset inode */ #define FILESET_RSVD_I 0 /* fileset inode (reserved) */ #define FILESET_EXT_I 1 /* fileset inode extension */ #define ROOT_I 2 /* fileset root inode */ #define ACL_I 3 /* fileset ACL inode */ #define FILESET_OBJECT_I 4 /* the first fileset inode available for a file * or directory or link... */ #define FIRST_FILESET_INO 16 /* the first aggregate inode which describes * an inode. (To fsck this is also the first * inode in part 2 of the agg inode table.) */ /* * directory configuration */ #define JFS_NAME_MAX 255 #define JFS_PATH_MAX BPSIZE /* * file system state (superblock state) */ #define FM_CLEAN 0x00000000 /* file system is unmounted and clean */ #define FM_MOUNT 0x00000001 /* file system is mounted cleanly */ #define FM_DIRTY 0x00000002 /* file system was not unmounted and clean * when mounted or * commit failure occurred while being mounted: * fsck() must be run to repair */ #define FM_LOGREDO 0x00000004 /* log based recovery (logredo()) failed: * fsck() must be run to repair */ #define FM_EXTENDFS 0x00000008 /* file system extendfs() in progress */ #define FM_STATE_MAX 0x0000000f /* max value of s_state */ #endif /* _H_JFS_FILSYS */ ] |
| 256/4 | 2026/03/16 16:25 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 85188 output: 43 thoughts: 338 Thoughts: **Analyzing Block Size in JFS** Okay, so I'm looking at this macro, `LBLK2PBLK`, which seems to be doing a bit shift based on block sizes. `L2PBSIZE` is defined as 9, and if the superblock's `s_blocksize_bits` is 12, then the shift is 3. That's straightforward enough. Now, circling back to the original question: is the superblock's block size always equal to the inode's `i_blkbits` in JFS? I know from my understanding of the JFS codebase that `sb->s_blocksize_bits` is initialized to 12 (for 4K blocks) within `jfs_fill_super`. More importantly, it seems that value is never altered. I can say, with confidence, that the inodes are initialized with `i_blkbits` set to the same value. JFS, from what I've observed, doesn't appear to modify `i_blkbits` after inode initialization. Just to be absolutely certain, I did a quick code search for writes to `inode::i_blkbits` within the `fs/jfs/` directory, and that yielded nothing. However, before I pat myself on the back, I should double check `jfs_get_block`. I see that both `inode->i_blkbits` and `inode->i_sb->s_blocksize_bits` are used there. I should analyze the code for that case. |
| 257/4 | 2026/03/16 16:25 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/inode.c IncludeLines:true Name:jfs_get_block] Results: map[SourceCode: 205: int jfs_get_block(struct inode *ip, sector_t lblock,
206: struct buffer_head *bh_result, int create)
207: {
208: s64 lblock64 = lblock;
209: int rc = 0;
210: xad_t xad;
211: s64 xaddr;
212: int xflag;
213: s32 xlen = bh_result->b_size >> ip->i_blkbits;
214:
215: /*
216: * Take appropriate lock on inode
217: */
218: if (create)
219: IWRITE_LOCK(ip, RDWRLOCK_NORMAL);
220: else
221: IREAD_LOCK(ip, RDWRLOCK_NORMAL);
222:
223: if (((lblock64 << ip->i_sb->s_blocksize_bits) < ip->i_size) &&
224: (!xtLookup(ip, lblock64, xlen, &xflag, &xaddr, &xlen, 0)) &&
225: xaddr) {
226: if (xflag & XAD_NOTRECORDED) {
227: if (!create)
228: /*
229: * Allocated but not recorded, read treats
230: * this as a hole
231: */
232: goto unlock;
233: XADoffset(&xad, lblock64);
234: XADlength(&xad, xlen);
235: XADaddress(&xad, xaddr);
236: rc = extRecord(ip, &xad);
237: if (rc)
238: goto unlock;
239: set_buffer_new(bh_result);
240: }
241:
242: map_bh(bh_result, ip->i_sb, xaddr);
243: bh_result->b_size = xlen << ip->i_blkbits;
244: goto unlock;
245: }
246: if (!create)
247: goto unlock;
248:
249: /*
250: * Allocate a new block
251: */
252: if ((rc = extHint(ip, lblock64 << ip->i_sb->s_blocksize_bits, &xad)))
253: goto unlock;
254: rc = extAlloc(ip, xlen, lblock64, &xad, false);
255: if (rc)
256: goto unlock;
257:
258: set_buffer_new(bh_result);
259: map_bh(bh_result, ip->i_sb, addressXAD(&xad));
260: bh_result->b_size = lengthXAD(&xad) << ip->i_blkbits;
261:
262: unlock:
263: /*
264: * Release lock on inode
265: */
266: if (create)
267: IWRITE_UNLOCK(ip);
268: else
269: IREAD_UNLOCK(ip);
270: return rc;
271: }
SourceFile:fs/jfs/inode.c] |
| 258/4 | 2026/03/16 16:25 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 86891 output: 22 thoughts: 268 Thoughts: **Analyzing Potential Block Size Inconsistencies in JFS** Okay, so I'm looking at `jfs_get_block` and immediately something catches my attention. Line 213, specifically the calculation of `xlen`, uses `ip->i_blkbits`, which seems to represent the block size *within* the inode. Then down at line 223, it uses a left-shift with `ip->i_sb->s_blocksize_bits`. And again on line 243, we shift using `ip->i_blkbits`, and on line 260 we calculate using `lengthXAD(&xad) << ip->i_blkbits`. If those two `*_bits` values, the block size *internal to the inode* versus the *blocksize reported by the superblock*, were ever mismatched, this whole thing would fall apart. The calculations would produce incorrect block offsets, and we'd be in a world of hurt. It would also be confusing from a code maintainability perspective. I really should verify whether these two can *actually* diverge. I'll need to check if `i_blkbits` is ever *changed* anywhere in the `fs/jfs/` directory. That's the key: is there any code that modifies this value? Let me do a quick search within that directory for `i_blkbits =`. |
| 259/4 | 2026/03/16 16:25 | tool | codesearch-find-references |
Error:requested entity does not exist Args: map[Name:i_blkbits] Results: map[References:<nil> TruncatedOutput:false] |
| 260/4 | 2026/03/16 16:25 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 87506 output: 24 thoughts: 80 Thoughts: Okay, here's my thought process on this, distilled into a first-person summary: **Initial Investigation into `i_blkbits` in the JFS Filesystem** Alright, the standard `codesearch-dir-index` is out of the question for this, so I need to find another way to locate where the variable `i_blkbits` is used. I could technically just use a regular string search. Since I'm only interested in the JFS filesystem, instead of searching the entire codebase I'll just check the files in `fs/jfs` to see if `i_blkbits` is present. My next step will be to search for `i_blkbits` within the `fs/jfs/` directory using `grep`. |
| 261/4 | 2026/03/16 16:25 | tool | codesearch-find-references |
0mArgs:map[Name:inode::i_blkbits] Results: map[References:[map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__blockdev_direct_IO SourceFile:fs/direct-io.c SourceLine:1110] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__blockdev_direct_IO SourceFile:fs/direct-io.c SourceLine:1110] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__blockdev_direct_IO SourceFile:fs/direct-io.c SourceLine:1110] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__blockdev_direct_IO SourceFile:fs/direct-io.c SourceLine:1110] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__blockdev_direct_IO SourceFile:fs/direct-io.c SourceLine:1110] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__blockdev_direct_IO SourceFile:fs/direct-io.c SourceLine:1110] map[ReferenceKind:takes-address-of ReferencingEntityKind:function ReferencingEntityName:__blockdev_direct_IO SourceFile:fs/direct-io.c SourceLine:1110] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__ext4_block_zero_page_range SourceFile:fs/ext4/inode.c SourceLine:4023] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__ext4_find_entry SourceFile:fs/ext4/namei.c SourceLine:1633] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__ext4_read_dirblock SourceFile:fs/ext4/namei.c SourceLine:134] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__find_get_block_slow SourceFile:fs/buffer.c SourceLine:184] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__get_metapage SourceFile:fs/jfs/jfs_metapage.c SourceLine:698] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__get_metapage SourceFile:fs/jfs/jfs_metapage.c SourceLine:716] map[ReferenceKind:reads ReferencingEntityKind:function 
ReferencingEntityName:__gfs2_fallocate SourceFile:fs/gfs2/file.c SourceLine:1335] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__gfs2_iomap_alloc SourceFile:fs/gfs2/bmap.c SourceLine:669] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__gfs2_iomap_alloc SourceFile:fs/gfs2/bmap.c SourceLine:765] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__gfs2_iomap_alloc SourceFile:fs/gfs2/bmap.c SourceLine:774] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__gfs2_iomap_get SourceFile:fs/gfs2/bmap.c SourceLine:886] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__gfs2_iomap_get SourceFile:fs/gfs2/bmap.c SourceLine:894] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__gfs2_iomap_get SourceFile:fs/gfs2/bmap.c SourceLine:895] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__gfs2_iomap_get SourceFile:fs/gfs2/bmap.c SourceLine:896] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__gfs2_iomap_get SourceFile:fs/gfs2/bmap.c SourceLine:898] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__gfs2_iomap_get SourceFile:fs/gfs2/bmap.c SourceLine:921] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__gfs2_iomap_get SourceFile:fs/gfs2/bmap.c SourceLine:922] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__gfs2_iomap_get SourceFile:fs/gfs2/bmap.c SourceLine:949] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:__invalidate_metapages SourceFile:fs/jfs/jfs_metapage.c SourceLine:901] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:_ext4_get_block SourceFile:fs/ext4/inode.c SourceLine:907] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:bh_get SourceFile:fs/gfs2/quota.c SourceLine:415] 
map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:bh_get SourceFile:fs/gfs2/quota.c SourceLine:423] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:block_size SourceFile:block/bdev.c SourceLine:1371] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:block_truncate_page SourceFile:fs/buffer.c SourceLine:2671] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:ceph_fill_inode SourceFile:fs/ceph/inode.c SourceLine:1107] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:ceph_fill_inode SourceFile:fs/ceph/inode.c SourceLine:1109] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:ceph_fill_inode SourceFile:fs/ceph/inode.c SourceLine:1112] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:ceph_fill_inode SourceFile:fs/ceph/inode.c SourceLine:1216] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:ceph_new_inode SourceFile:fs/ceph/inode.c SourceLine:127] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:cifs_alloc_inode SourceFile:fs/smb/client/cifsfs.c SourceLine:470] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:clean_bdev_aliases SourceFile:fs/buffer.c SourceLine:1738] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:dax_writeback_mapping_range SourceFile:fs/dax.c SourceLine:1209] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:do_mpage_readpage SourceFile:fs/mpage.c SourceLine:155] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:do_perf_trace_gfs2_iomap_end SourceFile:fs/gfs2/trace_gfs2.h SourceLine:511] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:do_trace_event_raw_event_gfs2_iomap_end SourceFile:fs/gfs2/trace_gfs2.h SourceLine:511] map[ReferenceKind:reads 
ReferencingEntityKind:function ReferencingEntityName:dquot_alloc_block SourceFile:include/linux/quotaops.h SourceLine:336] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:dquot_alloc_block_nodirty SourceFile:include/linux/quotaops.h SourceLine:326] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:dquot_alloc_block_nofail SourceFile:include/linux/quotaops.h SourceLine:331] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:dquot_claim_block SourceFile:include/linux/quotaops.h SourceLine:362] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:dquot_free_block SourceFile:include/linux/quotaops.h SourceLine:390] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:dquot_free_block_nodirty SourceFile:include/linux/quotaops.h SourceLine:385] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:dquot_prealloc_block_nodirty SourceFile:include/linux/quotaops.h SourceLine:341] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:dquot_reclaim_block SourceFile:include/linux/quotaops.h SourceLine:368] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:dquot_release_reservation_block SourceFile:include/linux/quotaops.h SourceLine:396] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:dquot_reserve_block SourceFile:include/linux/quotaops.h SourceLine:356] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:erofs_find_target_block SourceFile:fs/erofs/namei.c SourceLine:93] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:erofs_get_aops SourceFile:fs/erofs/internal.h SourceLine:480] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:erofs_map_blocks SourceFile:fs/erofs/data.c SourceLine:110] map[ReferenceKind:reads ReferencingEntityKind:function 
ReferencingEntityName:exfat_get_block SourceFile:fs/exfat/inode.c SourceLine:253] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_alloc_file_blocks SourceFile:fs/ext4/extents.c SourceLine:4557] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_alloc_file_blocks SourceFile:fs/ext4/extents.c SourceLine:4618] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_block_write_begin SourceFile:fs/ext4/inode.c SourceLine:1179] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_collapse_range SourceFile:fs/ext4/extents.c SourceLine:5539] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_collapse_range SourceFile:fs/ext4/extents.c SourceLine:5540] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_convert_meta_bg SourceFile:fs/ext4/resize.c SourceLine:1928] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_convert_unwritten_extents SourceFile:fs/ext4/extents.c SourceLine:5001] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_convert_unwritten_extents_atomic SourceFile:fs/ext4/extents.c SourceLine:4927] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_da_should_update_i_disksize SourceFile:fs/ext4/inode.c SourceLine:3182] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_do_fallocate SourceFile:fs/ext4/extents.c SourceLine:4778] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_do_fallocate SourceFile:fs/ext4/extents.c SourceLine:4779] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_do_fallocate SourceFile:fs/ext4/extents.c SourceLine:4779] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_do_fallocate SourceFile:fs/ext4/extents.c SourceLine:4779] 
map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_do_fallocate SourceFile:fs/ext4/extents.c SourceLine:4779] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_dx_find_entry SourceFile:fs/ext4/namei.c SourceLine:1731] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_end_bio SourceFile:fs/ext4/page-io.c SourceLine:369] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_get_verity_descriptor_location SourceFile:fs/ext4/verity.c SourceLine:307] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_init_orphan_info SourceFile:fs/ext4/orphan.c SourceLine:593] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_inode_blocks SourceFile:fs/ext4/inode.c SourceLine:5004] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_inode_blocks_set SourceFile:fs/ext4/inode.c SourceLine:4642] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_insert_range SourceFile:fs/ext4/extents.c SourceLine:5642] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_insert_range SourceFile:fs/ext4/extents.c SourceLine:5643] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_iomap_alloc SourceFile:fs/ext4/inode.c SourceLine:3705] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_iomap_begin SourceFile:fs/ext4/inode.c SourceLine:3753] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_iomap_begin_report SourceFile:fs/ext4/inode.c SourceLine:3837] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_map_blocks SourceFile:fs/ext4/inode.c SourceLine:832] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_map_blocks SourceFile:fs/ext4/inode.c SourceLine:833] map[ReferenceKind:reads 
ReferencingEntityKind:function ReferencingEntityName:ext4_mb_get_buddy_folio_lock SourceFile:fs/ext4/mballoc.c SourceLine:1556] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_get_buddy_folio_lock SourceFile:fs/ext4/mballoc.c SourceLine:1565] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_get_buddy_folio_lock SourceFile:fs/ext4/mballoc.c SourceLine:1568] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_init_cache SourceFile:fs/ext4/mballoc.c SourceLine:1409] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_init_cache SourceFile:fs/ext4/mballoc.c SourceLine:1448] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_load_buddy_gfp SourceFile:fs/ext4/mballoc.c SourceLine:1714] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_load_buddy_gfp SourceFile:fs/ext4/mballoc.c SourceLine:1747] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_load_buddy_gfp SourceFile:fs/ext4/mballoc.c SourceLine:1765] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_load_buddy_gfp SourceFile:fs/ext4/mballoc.c SourceLine:1768] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mb_load_buddy_gfp SourceFile:fs/ext4/mballoc.c SourceLine:1814] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mpage_readpages SourceFile:fs/ext4/readpage.c SourceLine:216] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mpage_readpages SourceFile:fs/ext4/readpage.c SourceLine:252] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_mpage_readpages SourceFile:fs/ext4/readpage.c SourceLine:253] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_overwrite_io SourceFile:fs/ext4/file.c 
SourceLine:222] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_punch_hole SourceFile:fs/ext4/inode.c SourceLine:4397] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_punch_hole SourceFile:fs/ext4/inode.c SourceLine:4398] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_readdir SourceFile:fs/ext4/dir.c SourceLine:196] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_set_iomap SourceFile:fs/ext4/inode.c SourceLine:3449] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_set_iomap SourceFile:fs/ext4/inode.c SourceLine:3472] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_set_iomap SourceFile:fs/ext4/inode.c SourceLine:3473] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_xattr_inode_read SourceFile:fs/ext4/xattr.c SourceLine:385] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_xattr_inode_read SourceFile:fs/ext4/xattr.c SourceLine:386] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_xattr_inode_write SourceFile:fs/ext4/xattr.c SourceLine:1391] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_zero_range SourceFile:fs/ext4/extents.c SourceLine:4673] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ext4_zero_range SourceFile:fs/ext4/extents.c SourceLine:4713] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fallocate_chunk SourceFile:fs/gfs2/file.c SourceLine:1206] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fallocate_chunk SourceFile:fs/gfs2/file.c SourceLine:1207] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fat_get_block SourceFile:fs/fat/inode.c SourceLine:187] map[ReferenceKind:reads ReferencingEntityKind:function 
ReferencingEntityName:fat_get_block_bmap SourceFile:fs/fat/inode.c SourceLine:295] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:filemap_range_uptodate SourceFile:mm/filemap.c SourceLine:2528] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:folio_create_buffers SourceFile:fs/buffer.c SourceLine:1800] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:folio_create_buffers SourceFile:fs/buffer.c SourceLine:1800] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:folio_create_buffers SourceFile:fs/buffer.c SourceLine:1800] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:folio_create_buffers SourceFile:fs/buffer.c SourceLine:1800] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:folio_create_buffers SourceFile:fs/buffer.c SourceLine:1800] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:folio_create_buffers SourceFile:fs/buffer.c SourceLine:1800] map[ReferenceKind:takes-address-of ReferencingEntityKind:function ReferencingEntityName:folio_create_buffers SourceFile:fs/buffer.c SourceLine:1800] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fscrypt_policy_du_bits SourceFile:fs/crypto/fscrypt_private.h SourceLine:218] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fscrypt_policy_v2_du_bits SourceFile:fs/crypto/fscrypt_private.h SourceLine:209] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fscrypt_prepare_new_inode SourceFile:fs/crypto/keysetup.c SourceLine:746] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fscrypt_setup_encryption_info SourceFile:fs/crypto/keysetup.c SourceLine:613] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fscrypt_supported_v2_policy SourceFile:fs/crypto/policy.c SourceLine:255] map[ReferenceKind:reads 
ReferencingEntityKind:function ReferencingEntityName:fscrypt_supported_v2_policy SourceFile:fs/crypto/policy.c SourceLine:262] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fscrypt_zeroout_range SourceFile:fs/crypto/bio.c SourceLine:142] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fscrypt_zeroout_range SourceFile:fs/crypto/bio.c SourceLine:143] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fscrypt_zeroout_range SourceFile:fs/crypto/bio.c SourceLine:144] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fscrypt_zeroout_range_inline_crypt SourceFile:fs/crypto/bio.c SourceLine:76] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:fsstack_copy_attr_all SourceFile:fs/stack.c SourceLine:72] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fsstack_copy_attr_all SourceFile:fs/stack.c SourceLine:72] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fsverity_init_merkle_tree_params SourceFile:fs/verity/open.c SourceLine:84] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fuse_fill_attr_from_inode SourceFile:fs/fuse/inode.c SourceLine:1698] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:generic_swapfile_activate SourceFile:mm/page_io.c SourceLine:94] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_add_inode_blocks SourceFile:fs/gfs2/inode.h SourceLine:57] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_alloc_extent SourceFile:fs/gfs2/bmap.c SourceLine:1290] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_block_map SourceFile:fs/gfs2/bmap.c SourceLine:1232] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_block_map SourceFile:fs/gfs2/bmap.c SourceLine:1254] map[ReferenceKind:reads 
ReferencingEntityKind:function ReferencingEntityName:gfs2_check_internal_file_size SourceFile:fs/gfs2/inode.h SourceLine:81] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_get_extent SourceFile:fs/gfs2/bmap.c SourceLine:1269] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_get_inode_blocks SourceFile:fs/gfs2/inode.h SourceLine:52] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_hole_size SourceFile:fs/gfs2/bmap.c SourceLine:600] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_iomap_begin_write SourceFile:fs/gfs2/bmap.c SourceLine:1054] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_iomap_get_folio SourceFile:fs/gfs2/bmap.c SourceLine:976] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_map_journal_extents SourceFile:fs/gfs2/bmap.c SourceLine:2287] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_set_inode_blocks SourceFile:fs/gfs2/inode.h SourceLine:47] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_unstuffer_folio SourceFile:fs/gfs2/bmap.c SourceLine:75] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_write_alloc_required SourceFile:fs/gfs2/bmap.c SourceLine:2350] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:gfs2_write_jdata_batch SourceFile:fs/gfs2/aops.c SourceLine:205] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:hpfs_iomap_begin SourceFile:fs/hpfs/file.c SourceLine:126] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:htree_dirblock_to_tree SourceFile:fs/ext4/namei.c SourceLine:1077] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:i_blocks_per_folio SourceFile:include/linux/pagemap.h SourceLine:1592] map[ReferenceKind:reads 
ReferencingEntityKind:function ReferencingEntityName:i_blocksize SourceFile:include/linux/fs.h SourceLine:979] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ifs_clear_range_dirty SourceFile:fs/iomap/buffered-io.c SourceLine:179] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ifs_clear_range_dirty SourceFile:fs/iomap/buffered-io.c SourceLine:180] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ifs_find_dirty_range SourceFile:fs/iomap/buffered-io.c SourceLine:142] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ifs_find_dirty_range SourceFile:fs/iomap/buffered-io.c SourceLine:143] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ifs_find_dirty_range SourceFile:fs/iomap/buffered-io.c SourceLine:143] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ifs_find_dirty_range SourceFile:fs/iomap/buffered-io.c SourceLine:157] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ifs_find_dirty_range SourceFile:fs/iomap/buffered-io.c SourceLine:158] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ifs_set_range_dirty SourceFile:fs/iomap/buffered-io.c SourceLine:202] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ifs_set_range_dirty SourceFile:fs/iomap/buffered-io.c SourceLine:203] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ifs_set_range_uptodate SourceFile:fs/iomap/buffered-io.c SourceLine:70] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ifs_set_range_uptodate SourceFile:fs/iomap/buffered-io.c SourceLine:71] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:inode_getblk SourceFile:fs/udf/inode.c SourceLine:842] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:inode_init_always_gfp SourceFile:fs/inode.c 
SourceLine:235] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:iomap_adjust_read_range SourceFile:fs/iomap/buffered-io.c SourceLine:293] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:iomap_bmap SourceFile:fs/iomap/fiemap.c SourceLine:99] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:iomap_bmap SourceFile:fs/iomap/fiemap.c SourceLine:103] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:iomap_dio_bio_iter_one SourceFile:fs/iomap/direct-io.c SourceLine:346] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:iomap_dio_zero SourceFile:fs/iomap/direct-io.c SourceLine:314] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:iomap_is_partially_uptodate SourceFile:fs/iomap/buffered-io.c SourceLine:694] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:iomap_is_partially_uptodate SourceFile:fs/iomap/buffered-io.c SourceLine:695] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:iomap_to_bh SourceFile:fs/buffer.c SourceLine:2043] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:iomap_to_bh SourceFile:fs/buffer.c SourceLine:2098] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:iomap_write_delalloc_ifs_punch SourceFile:fs/iomap/buffered-io.c SourceLine:1236] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_get_block SourceFile:fs/jfs/inode.c SourceLine:213] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_get_block SourceFile:fs/jfs/inode.c SourceLine:243] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:jfs_get_block SourceFile:fs/jfs/inode.c SourceLine:260] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:make_empty_dir_inode SourceFile:fs/libfs.c SourceLine:1781] 
map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:map_buffer_to_folio SourceFile:fs/mpage.c SourceLine:110] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:metapage_get_blocks SourceFile:fs/jfs/jfs_metapage.c SourceLine:330] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:metapage_read_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:577] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:metapage_read_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:594] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:metapage_read_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:597] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:metapage_read_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:598] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:metapage_write_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:444] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:metapage_write_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:468] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:metapage_write_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:475] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:metapage_write_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:492] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:metapage_write_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:506] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:metapage_write_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:512] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mext_check_adjust_range SourceFile:fs/ext4/move_extent.c SourceLine:524] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mext_check_adjust_range 
SourceFile:fs/ext4/move_extent.c SourceLine:525] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mext_check_adjust_range SourceFile:fs/ext4/move_extent.c SourceLine:542] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mext_check_adjust_range SourceFile:fs/ext4/move_extent.c SourceLine:543] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mext_folio_mkuptodate SourceFile:fs/ext4/move_extent.c SourceLine:137] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mext_folio_mkwrite SourceFile:fs/ext4/move_extent.c SourceLine:276] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mext_move_begin SourceFile:fs/ext4/move_extent.c SourceLine:202] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mext_move_extent SourceFile:fs/ext4/move_extent.c SourceLine:306] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:move_dirty_folio_in_page_array SourceFile:fs/ceph/addr.c SourceLine:1265] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mpage_map_and_submit_buffers SourceFile:fs/ext4/inode.c SourceLine:2300] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mpage_map_and_submit_buffers SourceFile:fs/ext4/inode.c SourceLine:2301] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mpage_map_and_submit_buffers SourceFile:fs/ext4/inode.c SourceLine:2312] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mpage_map_and_submit_extent SourceFile:fs/ext4/inode.c SourceLine:2465] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mpage_prepare_extent_to_map SourceFile:fs/ext4/inode.c SourceLine:2650] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mpage_prepare_extent_to_map SourceFile:fs/ext4/inode.c SourceLine:2728] map[ReferenceKind:reads 
ReferencingEntityKind:function ReferencingEntityName:mpage_process_folio SourceFile:fs/ext4/inode.c SourceLine:2253] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mpage_process_page_bufs SourceFile:fs/ext4/inode.c SourceLine:2170] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mpage_release_unused_pages SourceFile:fs/ext4/inode.c SourceLine:1721] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mpage_release_unused_pages SourceFile:fs/ext4/inode.c SourceLine:1722] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mpage_submit_partial_folio SourceFile:fs/ext4/inode.c SourceLine:2408] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:mpage_write_folio SourceFile:fs/mpage.c SourceLine:460] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_bmap_data_get_key SourceFile:fs/nilfs2/bmap.c SourceLine:437] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_btnode_prepare_change_key SourceFile:fs/nilfs2/btnode.c SourceLine:223] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_dirty_folio SourceFile:fs/nilfs2/inode.c SourceLine:202] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_fiemap SourceFile:fs/nilfs2/inode.c SourceLine:1112] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_find_uncommitted_extent SourceFile:fs/nilfs2/page.c SourceLine:519] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_find_uncommitted_extent SourceFile:fs/nilfs2/page.c SourceLine:538] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_get_block SourceFile:fs/nilfs2/inode.c SourceLine:81] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_get_block SourceFile:fs/nilfs2/inode.c SourceLine:89] 
map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_grab_buffer SourceFile:fs/nilfs2/page.c SourceLine:51] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_mdt_forget_block SourceFile:fs/nilfs2/mdt.c SourceLine:356] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_mdt_forget_block SourceFile:fs/nilfs2/mdt.c SourceLine:371] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_mdt_freeze_buffer SourceFile:fs/nilfs2/mdt.c SourceLine:573] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_mdt_get_frozen_buffer SourceFile:fs/nilfs2/mdt.c SourceLine:616] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_page_mkwrite SourceFile:fs/nilfs2/file.c SourceLine:107] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_palloc_entries_per_group SourceFile:fs/nilfs2/alloc.h SourceLine:30] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_palloc_groups_count SourceFile:fs/nilfs2/alloc.c SourceLine:43] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nilfs_recover_dsync_blocks SourceFile:fs/nilfs2/recovery.c SourceLine:543] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:nvmet_file_ns_enable SourceFile:drivers/nvme/target/io-cmd-file.c SourceLine:56] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_get_block SourceFile:fs/ocfs2/aops.c SourceLine:132] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_get_block SourceFile:fs/ocfs2/aops.c SourceLine:181] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ocfs2_write_zero_page SourceFile:fs/ocfs2/file.c SourceLine:768] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:omfs_get_block SourceFile:fs/omfs/file.c 
SourceLine:229] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:omfs_get_block SourceFile:fs/omfs/file.c SourceLine:259] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:orangefs_inode_getattr SourceFile:fs/orangefs/orangefs-utils.c SourceLine:320] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:read_file_page SourceFile:drivers/md/md-bitmap.c SourceLine:573] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:rock_continue SourceFile:fs/isofs/rock.c SourceLine:88] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:round_up_cluster SourceFile:fs/ext4/xattr.c SourceLine:909] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:set_blocksize SourceFile:block/bdev.c SourceLine:192] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:set_blocksize SourceFile:block/bdev.c SourceLine:209] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:set_init_blocksize SourceFile:block/bdev.c SourceLine:150] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_copy_fi SourceFile:fs/udf/directory.c SourceLine:70] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_extend_file SourceFile:fs/udf/inode.c SourceLine:683] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_extend_file SourceFile:fs/udf/inode.c SourceLine:702] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_fiiter_add_entry SourceFile:fs/udf/namei.c SourceLine:226] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_fiiter_add_entry SourceFile:fs/udf/namei.c SourceLine:275] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_fiiter_add_entry SourceFile:fs/udf/namei.c SourceLine:293] map[ReferenceKind:reads ReferencingEntityKind:function 
ReferencingEntityName:udf_fiiter_advance SourceFile:fs/udf/directory.c SourceLine:293] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_fiiter_advance_blk SourceFile:fs/udf/directory.c SourceLine:173] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_fiiter_advance_blk SourceFile:fs/udf/directory.c SourceLine:173] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_fiiter_append_blk SourceFile:fs/udf/directory.c SourceLine:462] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_fiiter_append_blk SourceFile:fs/udf/directory.c SourceLine:476] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_fiiter_init SourceFile:fs/udf/directory.c SourceLine:269] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_fiiter_load_bhs SourceFile:fs/udf/directory.c SourceLine:196] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_fiiter_write_fi SourceFile:fs/udf/directory.c SourceLine:412] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_map_block SourceFile:fs/udf/inode.c SourceLine:451] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_readahead_dir SourceFile:fs/udf/directory.c SourceLine:127] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_readahead_dir SourceFile:fs/udf/directory.c SourceLine:135] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_readahead_dir SourceFile:fs/udf/directory.c SourceLine:136] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:udf_verify_fi SourceFile:fs/udf/directory.c SourceLine:42] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ufs_alloc_lastblock SourceFile:fs/ufs/inode.c SourceLine:1054] map[ReferenceKind:reads ReferencingEntityKind:function 
ReferencingEntityName:ufs_alloc_lastblock SourceFile:fs/ufs/inode.c SourceLine:1060] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ufs_change_blocknr SourceFile:fs/ufs/balloc.c SourceLine:240] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ufs_change_blocknr SourceFile:fs/ufs/balloc.c SourceLine:256] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ufs_change_blocknr SourceFile:fs/ufs/balloc.c SourceLine:258] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:ufs_get_locked_folio SourceFile:fs/ufs/util.c SourceLine:221] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:z_erofs_load_compact_lcluster SourceFile:fs/erofs/zmap.c SourceLine:104]] TruncatedOutput:false] |
| 262/4 | 2026/03/16 16:25 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 102347 output: 49 thoughts: 1956 Thoughts: **Analysis of JFS Block Size Consistency**

Alright, after reviewing the JFS code, here's what I've concluded. I started by tracing how block size information is handled and have arrived at a pretty firm understanding.

1. During mount, `sb->s_blocksize_bits` is consistently set to 12, representing a 4K block size, and this value never changes for the superblock.
2. Every inode created, even special ones, has its `i_blkbits` initialized from `sb->s_blocksize_bits`.
3. I couldn't find any instances where `i_blkbits` is modified after this initialization. This implies a fixed block size at the inode level as well.
4. Metapages are always associated with a JFS inode.
5. `mp->sb` is set to the `i_sb` of the inode used to get the metapage.
6. Critically, `mp->folio->mapping->host` is the same inode (or another JFS inode, but with the same super block).

Therefore, based on these observations, it should be the case that `mp->sb->s_blocksize_bits` is always equal to `folio->mapping->host->i_blkbits`. The code heavily relies on this relationship.

I thought about potential edge cases:

* Could `folio->mapping->host` ever be NULL? No, based on how metapages are managed. `folio->mapping` can be NULL if the folio is being evicted, but JFS metapages hold a reference to the folio in `__get_metapage`, so `folio->mapping->host` is always valid, as `folio` comes from the `mapping`.
* Could `mp->sb` be different from `folio->mapping->host->i_sb`? Also no. The logic in `__get_metapage` sets `mp->sb` from either `inode->i_sb` or `sbi->direct_inode->i_sb`, and `folio` comes from a mapping that is either `inode->i_mapping` or `sbi->direct_inode->i_mapping`, where `host` has the same `i_sb` as the one used to set `mp->sb`.

I specifically considered whether `i_blkbits` could differ from `sb->s_blocksize_bits`. In some file systems that is a real possibility, but not in JFS.

I double-checked the `jfs_fill_super`, `jfs_mount`, and `chkSuper` functions and confirmed that the block size is hardcoded as 4096. I also checked `sbi->l2bsize`, which is derived from the superblock's stored block size. It's also 12, further reinforcing the 4K block size assumption. There's even a check in `metapage_read_folio` and `metapage_write_folio`, which explicitly validates that `i_blkbits` is not smaller than `s_blocksize_bits`.

Finally, to address the architectural consideration, I thought about the scenario where `PAGE_SIZE` might be larger than 4K (e.g., 16K or 64K). Even in these cases, `PSIZE` is still hardcoded to 4096. The code then uses bitwise operations in `__get_metapage` to handle the multiple metapages per page, but crucially, the block size remains 4K. I've confirmed that `i_blkbits` is 12 and `sb->s_blocksize_bits` is 12, regardless of the `PAGE_SIZE`. I searched again to verify whether `sb_set_blocksize` was ever called with values other than `PSIZE` during JFS's initialization, but that doesn't appear to happen.

I am confident that `sb->s_blocksize_bits` is always 12 and `inode->i_blkbits` is always 12. `chkSuper` explicitly forbids mounting JFS volumes with a different block size. In conclusion, it's very clear that block size consistency is rigidly enforced in JFS: `sb->s_blocksize_bits` and `folio->mapping->host->i_blkbits` will always be equal. |
| 263/4 | 2026/03/16 16:25 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:metapage_read_folio] Results: map[SourceCode: 564: static int metapage_read_folio(struct file *fp, struct folio *folio)
565: {
566: struct inode *inode = folio->mapping->host;
567: struct bio *bio = NULL;
568: int block_offset;
569: int blocks_per_page = i_blocks_per_folio(inode, folio);
570: sector_t page_start; /* address of page in fs blocks */
571: sector_t pblock;
572: int xlen;
573: unsigned int len;
574: int offset;
575:
576: BUG_ON(!folio_test_locked(folio));
577: page_start = folio_pos(folio) >> inode->i_blkbits;
578:
579: block_offset = 0;
580: while (block_offset < blocks_per_page) {
581: xlen = blocks_per_page - block_offset;
582: pblock = metapage_get_blocks(inode, page_start + block_offset,
583: &xlen);
584: if (pblock) {
585: if (!folio->private)
586: insert_metapage(folio, NULL);
587: inc_io(folio);
588: if (bio)
589: submit_bio(bio);
590:
591: bio = bio_alloc(inode->i_sb->s_bdev, 1, REQ_OP_READ,
592: GFP_NOFS);
593: bio->bi_iter.bi_sector =
594: pblock << (inode->i_blkbits - 9);
595: bio->bi_end_io = metapage_read_end_io;
596: bio->bi_private = folio;
597: len = xlen << inode->i_blkbits;
598: offset = block_offset << inode->i_blkbits;
599: bio_add_folio_nofail(bio, folio, len, offset);
600: block_offset += xlen;
601: } else
602: block_offset++;
603: }
604: if (bio)
605: submit_bio(bio);
606: else
607: folio_unlock(folio);
608:
609: return 0;
610: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 264/4 | 2026/03/16 16:25 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 106049 output: 49 thoughts: 55 Thoughts: Okay, let me gather my thoughts on this. **Revised Analysis of Potential Issue**

Alright, hang on a second. I need to backtrack. I was certain I saw that size comparison in the code, something about `l2bsize` and the block size. But on closer inspection... *nope*. It's not there. I misread something initially. That's fine, happens. I'll need to re-evaluate. The discrepancy is gone, so my initial concern is addressed.

Let me switch gears now. My next step will be to examine the code related to `metapage_write_folio`. I'll see if the intended operations are occurring there. |
| 265/4 | 2026/03/16 16:25 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:metapage_write_folio] Results: map[SourceCode: 423: static int metapage_write_folio(struct folio *folio,
424: struct writeback_control *wbc)
425: {
426: struct bio *bio = NULL;
427: int block_offset; /* block offset of mp within page */
428: struct inode *inode = folio->mapping->host;
429: int blocks_per_mp = JFS_SBI(inode->i_sb)->nbperpage;
430: int len;
431: int xlen;
432: struct metapage *mp;
433: int redirty = 0;
434: sector_t lblock;
435: int nr_underway = 0;
436: sector_t pblock;
437: sector_t next_block = 0;
438: sector_t page_start;
439: unsigned long bio_bytes = 0;
440: unsigned long bio_offset = 0;
441: int offset;
442: int bad_blocks = 0;
443:
444: page_start = folio_pos(folio) >> inode->i_blkbits;
445: BUG_ON(!folio_test_locked(folio));
446: BUG_ON(folio_test_writeback(folio));
447: folio_start_writeback(folio);
448:
449: for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
450: mp = folio_to_mp(folio, offset);
451:
452: if (!mp || !test_bit(META_dirty, &mp->flag))
453: continue;
454:
455: if (mp->nohomeok && !test_bit(META_forcewrite, &mp->flag)) {
456: redirty = 1;
457: /*
458: * Make sure this page isn't blocked indefinitely.
459: * If the journal isn't undergoing I/O, push it
460: */
461: if (mp->log && !(mp->log->cflag & logGC_PAGEOUT))
462: jfs_flush_journal(mp->log, 0);
463: continue;
464: }
465:
466: clear_bit(META_dirty, &mp->flag);
467: set_bit(META_io, &mp->flag);
468: block_offset = offset >> inode->i_blkbits;
469: lblock = page_start + block_offset;
470: if (bio) {
471: if (xlen && lblock == next_block) {
472: /* Contiguous, in memory & on disk */
473: len = min(xlen, blocks_per_mp);
474: xlen -= len;
475: bio_bytes += len << inode->i_blkbits;
476: continue;
477: }
478: /* Not contiguous */
479: bio_add_folio_nofail(bio, folio, bio_bytes, bio_offset);
480: /*
481: * Increment counter before submitting i/o to keep
482: * count from hitting zero before we're through
483: */
484: inc_io(folio);
485: if (!bio->bi_iter.bi_size)
486: goto dump_bio;
487: submit_bio(bio);
488: nr_underway++;
489: bio = NULL;
490: } else
491: inc_io(folio);
492: xlen = (folio_size(folio) - offset) >> inode->i_blkbits;
493: pblock = metapage_get_blocks(inode, lblock, &xlen);
494: if (!pblock) {
495: printk(KERN_ERR "JFS: metapage_get_blocks failed\n");
496: /*
497: * We already called inc_io(), but can't cancel it
498: * with dec_io() until we're done with the page
499: */
500: bad_blocks++;
501: continue;
502: }
503: len = min(xlen, (int)JFS_SBI(inode->i_sb)->nbperpage);
504:
505: bio = bio_alloc(inode->i_sb->s_bdev, 1, REQ_OP_WRITE, GFP_NOFS);
506: bio->bi_iter.bi_sector = pblock << (inode->i_blkbits - 9);
507: bio->bi_end_io = metapage_write_end_io;
508: bio->bi_private = folio;
509:
510: /* Don't call bio_add_page yet, we may add to this vec */
511: bio_offset = offset;
512: bio_bytes = len << inode->i_blkbits;
513:
514: xlen -= len;
515: next_block = lblock + len;
516: }
517: if (bio) {
518: bio_add_folio_nofail(bio, folio, bio_bytes, bio_offset);
519: if (!bio->bi_iter.bi_size)
520: goto dump_bio;
521:
522: submit_bio(bio);
523: nr_underway++;
524: }
525: if (redirty)
526: folio_redirty_for_writepage(wbc, folio);
527:
528: folio_unlock(folio);
529:
530: if (bad_blocks)
531: goto err_out;
532:
533: if (nr_underway == 0)
534: folio_end_writeback(folio);
535:
536: return 0;
537: dump_bio:
538: print_hex_dump(KERN_ERR, "JFS: dump of bio: ", DUMP_PREFIX_ADDRESS, 16,
539: 4, bio, sizeof(*bio), 0);
540: bio_put(bio);
541: folio_unlock(folio);
542: dec_io(folio, BLK_STS_OK, last_write_complete);
543: err_out:
544: while (bad_blocks--)
545: dec_io(folio, BLK_STS_OK, last_write_complete);
546: return -EIO;
547: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 266/4 | 2026/03/16 16:25 | llm | codeexpert |
0mModel:gemini-3-flash-preview Tokens: input: 108263 output: 49 thoughts: 266 Thoughts: **Analyzing Block Size Consistency in `metapage_write_folio`** Okay, let's break down this `metapage_write_folio` function. I see a consistent reliance on `inode->i_blkbits` throughout the code, which immediately piques my interest. Specifically, lines like the calculation of `page_start`, `block_offset`, `bio_bytes`, and `xlen` all seem to depend on this value. And then there's this line here: `bio->bi_iter.bi_sector = pblock << (inode->i_blkbits - 9);`. That's where things get interesting, because the sector address is being calculated using `inode->i_blkbits`. If `inode->i_blkbits` is *different* from the superblock's `s_blocksize_bits`, then this shift operation is essentially calculating the sector based on the inode's block size, not necessarily the underlying device's sector size. Given that `pblock` is obtained from `metapage_get_blocks`, it's critical to understand how block allocation relates to the reported block size. I need to dig into `metapage_get_blocks` to understand how that translation happens. It's that `pblock` return value that is key now. |
| 267/4 | 2026/03/16 16:25 | tool | codesearch-definition-source |
0m | Args:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:metapage_get_blocks] Results: map[SourceCode: 323: static sector_t metapage_get_blocks(struct inode *inode, sector_t lblock,
324: int *len)
325: {
326: int rc = 0;
327: int xflag;
328: s64 xaddr;
329: sector_t file_blocks = (inode->i_size + inode->i_sb->s_blocksize - 1) >>
330: inode->i_blkbits;
331:
332: if (lblock >= file_blocks)
333: return 0;
334: if (lblock + *len > file_blocks)
335: *len = file_blocks - lblock;
336:
337: if (inode->i_ino) {
338: rc = xtLookup(inode, (s64)lblock, *len, &xflag, &xaddr, len, 0);
339: if ((rc == 0) && *len)
340: lblock = (sector_t)xaddr;
341: else
342: lblock = 0;
343: } /* else no mapping */
344:
345: return lblock;
346: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 268/4 | 2026/03/16 16:25 | llm | codeexpert |
0mModel:gemini-3-flash-preview Tokens: input: 109259 output: 23 thoughts: 583 Thoughts: **JFS `i_blkbits` and Filesystem Block Size - A Deep Dive** Okay, so I'm digging into how JFS handles block sizes, specifically around metapages. My initial thought process revolves around how `metapage_get_blocks` is implemented. It's using both `inode->i_sb->s_blocksize` and `inode->i_blkbits` to determine file blocks. This immediately raises a red flag. If these two values weren't inherently linked, it would be extremely suspect and probably a bug. Looking at how `xtLookup` works, it uses `lblock`, which is based on `i_blkbits`. The way the JFS `xtree` is designed, it always operates using the underlying filesystem block size, which is commonly 4KB. Therefore, if `i_blkbits` was anything other than 12 (log base 2 of 4096), there would be a unit mismatch between `xtLookup`'s input and its expected block size. This confirms my suspicion that they are related. So, logically, in JFS, `i_blkbits` must *always* equal `sb->s_blocksize_bits`. And since `mp->sb` in the metapage struct is derived from the inode's superblock, which is the same as the superblock associated with the folio through the mapping and inode (`folio->mapping->host->i_sb`), the answer to the initial question is affirmative. `i_blkbits` does reflect the filesystem block size. Now, let's play devil's advocate and check for any edge cases. I'm going back to the source, looking specifically at `__get_metapage`. I want to verify that `mp->sb` actually does always equal `folio->mapping->host->i_sb`. From the code, it looks as though `mp->sb` is set to `inode->i_sb`. The `folio` associated with the metapage comes from a `mapping`, which is the `inode->i_mapping` or perhaps a direct inode mapping. Both of these are linked to an inode. This means that both cases point back to the same superblock. So I need to confirm `mp->sb` is the same as `folio->mapping->host->i_sb`. 
Looking at `folio` and its host, it appears my suspicions were correct. The last thing I want to be certain about is whether there might be some odd scenario where the `folio->mapping->host` is *not* a JFS inode. But, considering that metapages are used exclusively by JFS, the `address_space_operations` for these metapages are always set to `jfs_metapage_aops`, which are only set on JFS inodes. Alright, I think that's about it. To make absolutely sure, I'm going to scan for other calls to `alloc_metapage` just in case I missed any edge cases in the code. |
| 269/4 | 2026/03/16 16:26 | tool | codesearch-find-references |
0m | Args:map[Name:alloc_metapage] Results: map[References:[map[ReferenceKind:calls ReferencingEntityKind:function ReferencingEntityName:__get_metapage SourceFile:fs/jfs/jfs_metapage.c SourceLine:760]] TruncatedOutput:false] |
| 270/4 | 2026/03/16 16:26 | llm | codeexpert |
0mModel:gemini-3-flash-preview Tokens: input: 110582 output: 754 thoughts: 2124 Thoughts: **Analysis of s_blocksize_bits and i_blkbits in JFS Metapages** Okay, so the question is whether `mp->sb->s_blocksize_bits` is always equal to `folio->mapping->host->i_blkbits` in the JFS implementation. Let's break this down. First, I recall this happens only in `__get_metapage`. My initial thought is "yes," but I need to be thorough. I should check for any oddities regarding `direct_inode`. Looking at `jfs_fill_super`, `direct_inode` is a regular inode created with `new_inode(sb)`. No red flags there. Next, the block device's inode (`direct_inode`) uses its own mapping; it doesn't use the block device's mapping for metapages. Let's revisit `__get_metapage`: if `absolute` is true, we use `direct_inode->i_mapping`, otherwise `inode->i_mapping`. In both cases, the mapping's host is a JFS inode, which is what matters. All JFS inodes have `i_blkbits == sb->s_blocksize_bits`. So, the question becomes: is `sb->s_blocksize_bits` ever *not* 12? I remember from my earlier work that `chkSuper` enforces `bsize == 4096`, and `PSIZE` is 4096. Therefore, `sb->s_blocksize_bits` is indeed always 12. Now, could some generic code change `inode->i_blkbits`? `sb_set_blocksize` is the method to do that, but JFS only calls it once. `inode_init_always` sets it from `sb->s_blocksize_bits`. This reinforces my confidence. Okay, let's look at this "always" aspect. Can `sb->s_blocksize_bits` and `inode->i_blkbits` ever differ during a metapage's lifetime? Since neither change in JFS, they'll remain equal. What about `folio->mapping->host` being NULL? That's a crash. The JFS code assumes the `folio->mapping` is valid, as evidenced in functions like `metapage_read_folio` and `metapage_write_folio`. If the question is mathematical, meaning in the current JFS implementation, then the answer is "yes." 
What about an edge case: `folio->mapping->host` being a different inode than the one used to create the metapage? No. In `__get_metapage`, `mp->sb` is set to `inode->i_sb`. And `mp->folio` is set to a folio from `mapping`, where `mapping` is either `inode->i_mapping` or `sbi->direct_inode->i_mapping`. Hence `mp->folio->mapping->host` is `inode` or `sbi->direct_inode`. Both share the same `i_sb` as the `inode` passed to `__get_metapage`. So, `mp->sb == mp->folio->mapping->host->i_sb`. And because all JFS inodes have `i_blkbits == i_sb->s_blocksize_bits`, they're equal. I should also make sure `i_blkbits` is *never* different from `s_blocksize_bits` for *any* inode in the kernel. Typically, it is initialized from `s_blocksize_bits`. Some filesystems might change it, but JFS doesn't. `direct_inode` is not special; it is created by `new_inode`. The special inodes such as `ipbmap`, `ipaimap`, `ipaimap2`, and `ipimap` are all created by `diReadSpecial`, which calls `new_inode(sb)`. All have `i_blkbits == sb->s_blocksize_bits`. I'm feeling confident. Let me confirm the type of `mp->sb` is a `struct super_block *`. It is. So, the core question: Is `mp->sb->s_blocksize_bits` always equal to `folio->mapping->host->i_blkbits`? Yes, because: 1. `mp->sb` is the `i_sb` of the folio's host inode. 2. JFS only supports 4K blocks (enforced by `chkSuper`), so `s_blocksize_bits` is always 12. 3. JFS inodes *always* have `i_blkbits` set to `s |
| 271/2 | 2026/03/16 16:26 | llm | debugger |
0mModel:gemini-3.1-pro-preview Tokens: input: 198439 output: 1183 thoughts: 728 Thoughts: **Root Cause and Fix for the JFS NULL Pointer Dereference** Okay, here's what's happening. I've dissected the issue, and it boils down to a classic NULL pointer dereference in the JFS file system code, specifically within `metapage_write_folio` at `fs/jfs/jfs_metapage.c:428`. The critical line is: ```c struct inode *inode = folio->mapping->host; ``` The problem is that `folio->mapping` is, unfortunately, NULL at this point. This happens because the folio, which is essentially a chunk of data representing a file's metadata, has been removed from the page cache. This is where things get interesting. In JFS, directory index tables are stored using the inode's mapping, `ip->i_mapping`. The scenario unfolds like this: When a directory gets deleted, the `jfs_rmdir` function removes its entry in its parent directory. If the directory then becomes empty, `dtInitRoot` kicks in, truncating the index table using `xtTruncate`, setting the `COMMIT_Stale` flag on the empty directory. Following this, the calling function (could be `jfs_unlink` or `jfs_rmdir`) checks the `COMMIT_Stale` flag and calls `jfs_truncate_nolock(ip, 0)`. `jfs_truncate_nolock` then uses `truncate_setsize(ip, 0)` to wipe out all the folios associated with that directory's index table from the page cache. This is the key: `folio->mapping` gets set to NULL in this step. However, the dirty metapages representing the *changed* index table data remain in the transaction's `tlock` list, representing changes made before the truncation. When the transaction commits (synchronously or asynchronously by `jfs_lazycommit`), `txUnlock` is called. It eventually calls `put_metapage(mp)`, which leads to `release_metapage(mp)`. `release_metapage` sees the `META_dirty` flag is set, correctly identifies the change, and proceeds to call `metapage_write_one(folio)`. Which then calls `metapage_write_folio(folio, &wbc)`. 
And there it is: the problematic call to `folio->mapping->host` results in a crash, because `folio->mapping` is NULL. So, the fix? We need to add a crucial check in `release_metapage` and `force_metapage`. Before attempting to write the metapage, we must verify whether `folio->mapping` is NULL. If it *is* NULL, we know the folio has been truncated and removed. In this case, we **do not** want to write it back to disk. Instead, we should clear the `META_dirty` and `META_sync` flags, unlock the folio (in `force_metapage`), and, if necessary, remove the metapage from the logsync list. This ensures that the NULL pointer dereference is avoided and, more importantly, that `drop_metapage` can free the metapage memory, preventing a memory leak. Additionally, I see a potential hazard on architectures where `PAGE_SIZE` is greater than 4096 (e.g., 64K pages on ARM64 or PPC). In those scenarios, `remove_metapage` and `insert_metapage` also dereference `folio->mapping->host->i_blkbits`, which could lead to similar crashes if `folio->mapping` is NULL. To eliminate this risk, we should update these functions to use `mp->sb->s_blocksize_bits` instead of the value dereferenced through the inode, preserving the behavior without risking a NULL pointer dereference. |
| 272/1 | 2026/03/16 16:27 | action | kernel-scratch-checkouter |
0m | Results:map[KernelScratchSrc:/usr/local/google/home/dvyukov/syzkaller/agent/workdir/cache/tmp/tmp3915903047] |
| 273/1 | 2026/03/16 16:27 | loop |
22m |
|
| 274/2 | 2026/03/16 16:27 | iteration | 0 |
22m |
| 275/3 | 2026/03/16 16:27 | agent | patch-generator |
13mModel:gemini-3.1-pro-preview Instruction: You are an experienced Linux kernel developer tasked with creating a fix for a kernel bug. You will be given a crash report, and an initial explanation of the root cause done by another kernel expert. Use the codeedit tool to do code edits. Note: you will not see your changes when looking at the code using codesearch tools. Your final reply should contain explanation of what you did in the patch and why (details not present in the initial explanation of the bug). Your fix must not just prevent the given crash, but also be the best fix for the underlying root cause from the software engineering point of view. There can be several ways to fix the same bug. Consider alternatives, and pick the best one. For example, additional checks may be added at different locations/functions, it's usually better to place them earlier in the execution to avoid multiple checks at various locations later. Frequently the same coding mistake is done in several locations in the source code. Check if your fix should be extended/applied to similar cases around to fix other similar bugs. But don't go too wide, don't try to fix problems kernel-wide, fix similar issues in the same file only. If you are changing post-conditions of a function, consider all callers of the functions, and if they need to be updated to handle new post-conditions. For example, if you make a function that previously never returned a NULL, return NULL, consider if callers need to be updated to handle NULL return value. Prefer calling several tools at the same time to save round-trips. Prompt:
The crash that corresponds to the bug is:
Oops: general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] SMP KASAN NOPTI
KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
CPU: 1 UID: 0 PID: 121 Comm: jfsCommit Not tainted syzkaller #1 PREEMPT_{RT,(full)}
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.17.0-debian-1.17.0-1 04/01/2014
RIP: 0010:metapage_write_folio+0xa8/0xca0 fs/jfs/jfs_metapage.c:428
Code: e8 cd 34 78 fe 4d 8d 74 24 18 4c 89 f0 48 c1 e8 03 42 80 3c 38 00 74 08 4c 89 f7 e8 82 91 dc fe 4d 8b 36 4c 89 f0 48 c1 e8 03 <42> 80 3c 38 00 74 08 4c 89 f7 e8 69 91 dc fe 49 8b 1e 4d 8d 74 24
RSP: 0018:ffffc90002197840 EFLAGS: 00010256
RAX: 0000000000000000 RBX: 1ffffd400023200e RCX: ffff88801e04bd80
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffffc90002197950 R08: ffffea0001190077 R09: 1ffffd400023200e
R10: dffffc0000000000 R11: fffff9400023200f R12: ffffea0001190040
R13: 0000000000000081 R14: 0000000000000000 R15: dffffc0000000000
FS: 0000000000000000(0000) GS:ffff8880ecaf3000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f1b44e236e0 CR3: 000000000d9be000 CR4: 0000000000750ef0
PKRU: 55555554
Call Trace:
<TASK>
metapage_write_one+0x24c/0x450 fs/jfs/jfs_metapage.c:818
release_metapage+0x5b9/0xab0 fs/jfs/jfs_metapage.c:884
txUnlock+0x509/0xdc0 fs/jfs/jfs_txnmgr.c:948
txLazyCommit fs/jfs/jfs_txnmgr.c:2683 [inline]
jfs_lazycommit+0x52e/0xa10 fs/jfs/jfs_txnmgr.c:2734
kthread+0x389/0x480 kernel/kthread.c:436
ret_from_fork+0x513/0xba0 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
</TASK>
Modules linked in:
---[ end trace 0000000000000000 ]---
RIP: 0010:metapage_write_folio+0xa8/0xca0 fs/jfs/jfs_metapage.c:428
Code: e8 cd 34 78 fe 4d 8d 74 24 18 4c 89 f0 48 c1 e8 03 42 80 3c 38 00 74 08 4c 89 f7 e8 82 91 dc fe 4d 8b 36 4c 89 f0 48 c1 e8 03 <42> 80 3c 38 00 74 08 4c 89 f7 e8 69 91 dc fe 49 8b 1e 4d 8d 74 24
RSP: 0018:ffffc90002197840 EFLAGS: 00010256
RAX: 0000000000000000 RBX: 1ffffd400023200e RCX: ffff88801e04bd80
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffffc90002197950 R08: ffffea0001190077 R09: 1ffffd400023200e
R10: dffffc0000000000 R11: fffff9400023200f R12: ffffea0001190040
R13: 0000000000000081 R14: 0000000000000000 R15: dffffc0000000000
FS: 0000000000000000(0000) GS:ffff8880ecaf3000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f1b44e236e0 CR3: 000000000d9be000 CR4: 0000000000750ef0
PKRU: 55555554
----------------
Code disassembly (best guess):
0: e8 cd 34 78 fe call 0xfe7834d2
5: 4d 8d 74 24 18 lea 0x18(%r12),%r14
a: 4c 89 f0 mov %r14,%rax
d: 48 c1 e8 03 shr $0x3,%rax
11: 42 80 3c 38 00 cmpb $0x0,(%rax,%r15,1)
16: 74 08 je 0x20
18: 4c 89 f7 mov %r14,%rdi
1b: e8 82 91 dc fe call 0xfedc91a2
20: 4d 8b 36 mov (%r14),%r14
23: 4c 89 f0 mov %r14,%rax
26: 48 c1 e8 03 shr $0x3,%rax
* 2a: 42 80 3c 38 00 cmpb $0x0,(%rax,%r15,1) <-- trapping instruction
2f: 74 08 je 0x39
31: 4c 89 f7 mov %r14,%rdi
34: e8 69 91 dc fe call 0xfedc91a2
39: 49 8b 1e mov (%r14),%rbx
3c: 4d rex.WRB
3d: 8d .byte 0x8d
3e: 74 24 je 0x64
The explanation of the root cause of the bug is:
The crash is a NULL pointer dereference in `metapage_write_folio()` caused by `folio->mapping` being `NULL`.
### Root Cause Analysis
In JFS, directory index tables are stored in the directory inode's mapping (`ip->i_mapping`) and accessed via metapages.
When a directory is deleted, the following sequence of events occurs:
1. `jfs_rmdir()` removes the directory entry from its parent. If the directory becomes empty, `dtInitRoot()` is called.
2. `dtInitRoot()` truncates the directory's index table by calling `xtTruncate(tid, ip, 0, COMMIT_PWMAP)` and sets the `COMMIT_Stale` flag on the child directory.
3. The caller (e.g., `jfs_rmdir` or `jfs_unlink`) checks the `COMMIT_Stale` flag and calls `jfs_truncate_nolock(ip, 0)`.
4. `jfs_truncate_nolock()` calls `truncate_setsize(ip, 0)`, which removes all folios of the directory's index table from the page cache. As part of this removal, `folio->mapping` is set to `NULL`.
However, the dirty metapages of the index table (modified during the deletion process) are still in the `tlock` list of the transaction.
When the transaction is eventually committed, `txUnlock()` is called (often asynchronously by the `jfs_lazycommit` kthread).
1. `txUnlock()` calls `put_metapage(mp)`, which in turn calls `release_metapage(mp)`.
2. `release_metapage()` sees that `test_bit(META_dirty, &mp->flag)` is true, and attempts to write the page back by calling `metapage_write_one(folio)`.
3. `metapage_write_one()` calls `metapage_write_folio(folio, &wbc)`.
4. `metapage_write_folio()` executes `struct inode *inode = folio->mapping->host;`. Since `folio->mapping` is `NULL`, this results in a general protection fault / NULL pointer dereference.
### How to Fix
To fix this, `release_metapage()` and `force_metapage()` must check if `folio->mapping` is `NULL` before attempting to write the metapage. If it is `NULL`, it means the folio has been truncated and removed from the page cache. The metapage should not be written back. Instead, we should clear the `META_dirty` and `META_sync` flags, and remove the metapage from the logsync list if necessary.
Clearing the `META_dirty` flag is crucial because `drop_metapage()` checks this flag; if it remains set, `drop_metapage()` will return early, causing a permanent memory leak of the metapage structure.
**1. Fix in `release_metapage` (`fs/jfs/jfs_metapage.c:880`):**
```c
if (folio->mapping) {
if (test_bit(META_dirty, &mp->flag)) {
folio_mark_dirty(folio);
if (test_bit(META_sync, &mp->flag)) {
clear_bit(META_sync, &mp->flag);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
folio_lock(folio);
}
} else if (mp->lsn) /* discard_metapage doesn't remove it */
remove_from_logsync(mp);
} else {
clear_bit(META_dirty, &mp->flag);
clear_bit(META_sync, &mp->flag);
if (mp->lsn)
remove_from_logsync(mp);
}
```
**2. Fix in `force_metapage` (`fs/jfs/jfs_metapage.c:839`):**
```c
folio_lock(folio);
if (folio->mapping) {
folio_mark_dirty(folio);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
} else {
clear_bit(META_dirty, &mp->flag);
folio_unlock(folio);
}
clear_bit(META_forcewrite, &mp->flag);
```
**Additional Note for architectures with `PAGE_SIZE > 4096`:**
On architectures where `PAGE_SIZE > 4096` (e.g., 64K pages on ARM64/PPC), `insert_metapage()` and `remove_metapage()` also dereference `folio->mapping->host->i_blkbits`. To prevent similar crashes there when `folio->mapping` is `NULL`, they should be changed to use `mp->sb->s_blocksize_bits` instead, which is always safely available and identical to `i_blkbits` in JFS.
Reply: I have analyzed the root cause and implemented a comprehensive fix to address the NULL pointer dereference in `metapage_write_folio()` and prevent similar issues across `fs/jfs/jfs_metapage.c`. ### What I did in the patch and why: 1. **Fixed `release_metapage()` and `force_metapage()`:** As identified in the root cause analysis, when a directory is deleted, its index table is truncated, which removes folios from the page cache and sets `folio->mapping` to `NULL`. However, dirty metapages can still be lingering in the transaction's `tlock` list. When `txUnlock()` eventually calls `release_metapage()`, it attempts to write the page back via `metapage_write_one()`, leading to a crash. I added a check for `folio->mapping` in both `release_metapage()` and `force_metapage()`. If the mapping is `NULL`, it means the folio has been truncated. Instead of writing it back, we now clear the `META_dirty` and `META_sync` flags, and remove the metapage from the logsync list if necessary. Clearing `META_dirty` is crucial because `drop_metapage()` checks this flag; if it remains set, `drop_metapage()` returns early, causing a permanent memory leak of the metapage structure. 2. **Fixed `insert_metapage()` and `remove_metapage()` for architectures with `PAGE_SIZE > 4096`:** On architectures with larger page sizes (e.g., 64K pages on ARM64/PPC), `MPS_PER_PAGE > 1`. In this configuration, `insert_metapage()` and `remove_metapage()` were dereferencing `folio->mapping->host->i_blkbits` to calculate the log2 blocks per metapage. If these functions are called on a truncated folio (where `folio->mapping` is `NULL`), this would result in another NULL pointer dereference. I changed these functions to use `mp->sb->s_blocksize_bits` instead. The metapage's superblock pointer (`mp->sb`) is always safely available and its blocksize bits are identical to the inode's `i_blkbits` in JFS, making this a safe and robust alternative. 3. 
**Fixed `last_write_complete()`:** During my review of other places where `folio->mapping` is dereferenced, I found that `last_write_complete()` (the bio `end_io` callback for metapage writes) calls `mapping_set_error(folio->mapping, err)` if an I/O error occurs. If a folio is truncated while it is under writeback, `folio->mapping` becomes `NULL`, so a writeback that completes with an error would crash there. I added a simple `if (folio->mapping)` check before calling `mapping_set_error()` to prevent this. These changes collectively ensure that JFS gracefully handles metapages associated with truncated folios across all architectures and execution paths, preventing both kernel crashes and memory leaks. |
| 276/4 | 2026/03/16 16:27 | llm | patch-generator |
0mModel:gemini-3.1-pro-preview Tokens: input: 6730 output: 32 thoughts: 329 Thoughts: **JFS Metapage Bug Analysis and Proposed Fix** Okay, so I've been digging into this JFS issue, and it looks like we have a nasty little bug in `metapage_write_folio()` that's causing a crash. It's a classic NULL pointer dereference – `folio->mapping` is unexpectedly NULL, and boom. The root cause, as I understand it, stems from directory deletion and index table truncation. Basically, when a directory gets the axe, we're removing folios from the page cache, which, understandably, sets `folio->mapping` to NULL. The kicker is, even though those folios are "gone" from the cache, dirty metapages are still clinging to the `tlock` list within the active transaction. When the transaction commits, `txUnlock()` gets called, which in turn calls `put_metapage()`. That's where things go sideways: `put_metapage()` calls `release_metapage()`, and *that* calls `metapage_write_one()`, which finally gets to `metapage_write_folio()`. And that's where the dereference happens – `folio->mapping->host`. We're trying to write back a page that doesn't have a valid mapping. So, the proposed fix is pretty straightforward, and it makes sense. We need to explicitly check if `folio->mapping` is NULL within both `release_metapage()` and `force_metapage()`. If it *is* NULL, we simply *don't* write the metapage back. Instead, we should clear the `META_dirty` and `META_sync` flags, and, if needed, take the metapage off the logsync list. That'll prevent the write and avoid the crash. Also, I need to keep in mind, we have the architecture considerations for those systems with `PAGE_SIZE > 4096`. For those, `insert_metapage()` and `remove_metapage()` are dereferencing `folio->mapping->host->i_blkbits`. We should change those to use `mp->sb->s_blocksize_bits` instead. That will make it more robust for larger page sizes. Time to dive into `fs/jfs/jfs_metapage.c` and get this patched up. |
| 277/4 | 2026/03/16 16:27 | tool | codesearch-file-index |
0m | Args:map[SourceFile:fs/jfs/jfs_metapage.c] Results: map[Entities:[map[Kind:function Name:__get_metapage] map[Kind:function Name:__invalidate_metapages] map[Kind:function Name:__lock_metapage] map[Kind:function Name:__metapage_migrate_folio] map[Kind:function Name:alloc_metapage] map[Kind:function Name:drop_metapage] map[Kind:function Name:folio_to_mp] map[Kind:function Name:force_metapage] map[Kind:function Name:free_metapage] map[Kind:function Name:grab_metapage] map[Kind:function Name:hold_metapage] map[Kind:function Name:insert_metapage] map[Kind:function Name:last_read_complete] map[Kind:function Name:last_write_complete] map[Kind:function Name:lock_metapage] map[Kind:function Name:metapage_exit] map[Kind:function Name:metapage_get_blocks] map[Kind:function Name:metapage_init] map[Kind:function Name:metapage_invalidate_folio] map[Kind:function Name:metapage_migrate_folio] map[Kind:function Name:metapage_read_end_io] map[Kind:function Name:metapage_read_folio] map[Kind:function Name:metapage_release_folio] map[Kind:function Name:metapage_write_end_io] map[Kind:function Name:metapage_write_folio] map[Kind:function Name:metapage_write_one] map[Kind:function Name:metapage_writepages] map[Kind:function Name:put_metapage] map[Kind:function Name:release_metapage] map[Kind:function Name:remove_from_logsync] map[Kind:function Name:remove_metapage] map[Kind:function Name:unlock_metapage] map[Kind:global_variable Name:jfs_metapage_aops]]]
| 278/4 | 2026/03/16 16:27 | llm | patch-generator |
0mModel:gemini-3.1-pro-preview Tokens: input: 8177 output: 188 thoughts: 37 Thoughts: Okay, let's dive into these functions, really get a grip on their interplay. I need to refresh my understanding of how metapages are handled at this level. **Deep Dive on Metapage Management** Right, so I'm thinking about the low-level metapage management functions: `release_metapage()`, `force_metapage()`, `insert_metapage()`, and `remove_metapage()`. I need to mentally walk through the logic of each, and how they relate to the bigger picture. I'm going to quickly review how these pieces fit within the larger data structures. I need to remind myself of the conditions that trigger each of these operations. Are we talking about explicit calls, or are some triggered implicitly, by other events? Does the use of these functions differ between architectures? I need to look for any potential race conditions or locking issues within the context of these. What are the side effects of each action? Are there any performance bottlenecks introduced by frequent metapage operations? It's essential to understand the potential consequences of each, especially when it comes to memory management and data consistency. Let's see how each works. |
| 279/4 | 2026/03/16 16:27 | tool | codesearch-definition-source |
0m | Args:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:release_metapage] Results: map[SourceCode: 865: void release_metapage(struct metapage * mp)
866: {
867: struct folio *folio = mp->folio;
868: jfs_info("release_metapage: mp = 0x%p, flag = 0x%lx", mp, mp->flag);
869:
870: folio_lock(folio);
871: unlock_metapage(mp);
872:
873: assert(mp->count);
874: if (--mp->count || mp->nohomeok) {
875: folio_unlock(folio);
876: folio_put(folio);
877: return;
878: }
879:
880: if (test_bit(META_dirty, &mp->flag)) {
881: folio_mark_dirty(folio);
882: if (test_bit(META_sync, &mp->flag)) {
883: clear_bit(META_sync, &mp->flag);
884: if (metapage_write_one(folio))
885: jfs_error(mp->sb, "metapage_write_one() failed\n");
886: folio_lock(folio);
887: }
888: } else if (mp->lsn) /* discard_metapage doesn't remove it */
889: remove_from_logsync(mp);
890:
891: /* Try to keep metapages from using up too much memory */
892: drop_metapage(folio, mp);
893:
894: folio_unlock(folio);
895: folio_put(folio);
896: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 280/4 | 2026/03/16 16:27 | tool | codesearch-definition-source |
0m | Args:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:force_metapage] Results: map[SourceCode: 831: void force_metapage(struct metapage *mp)
832: {
833: struct folio *folio = mp->folio;
834: jfs_info("force_metapage: mp = 0x%p", mp);
835: set_bit(META_forcewrite, &mp->flag);
836: clear_bit(META_sync, &mp->flag);
837: folio_get(folio);
838: folio_lock(folio);
839: folio_mark_dirty(folio);
840: if (metapage_write_one(folio))
841: jfs_error(mp->sb, "metapage_write_one() failed\n");
842: clear_bit(META_forcewrite, &mp->flag);
843: folio_put(folio);
844: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 281/4 | 2026/03/16 16:27 | tool | codesearch-definition-source |
0m | Args:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:insert_metapage] Results: map[SourceCode: 213: static inline int insert_metapage(struct folio *folio, struct metapage *mp)
214: {
215: if (mp) {
216: folio_attach_private(folio, mp);
217: kmap(&folio->page);
218: }
219: return 0;
220: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 282/4 | 2026/03/16 16:27 | tool | codesearch-definition-source |
0m | Args:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:remove_metapage] Results: map[SourceCode: 222: static inline void remove_metapage(struct folio *folio, struct metapage *mp)
223: {
224: folio_detach_private(folio);
225: kunmap(&folio->page);
226: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 283/4 | 2026/03/16 16:27 | llm | patch-generator |
0mModel:gemini-3.1-pro-preview Tokens: input: 9695 output: 30 thoughts: 146 Thoughts: **Investigating Potential Data Corruption in JFS Metapage Handling** Okay, hold on a second. I'm looking at `fs/jfs/jfs_metapage.c` and specifically at the functions `insert_metapage` and `remove_metapage`. It strikes me as odd that they don't seem to be dereferencing `folio->mapping->host->i_blkbits` directly. Now, that's not right...unless there's something I'm missing. Ah, there it is. The note in the code explains that on architectures where the `PAGE_SIZE` is greater than 4096 bytes – like on ARM64 or PPC with their 64K pages – those functions *do* dereference `folio->mapping->host->i_blkbits`. The code's behavior changes based on that architecture parameter. Clever, keeping it contained, but let's just confirm those are the *only* definitions of those functions. I want to be certain there isn't some other variant lurking that's doing something unexpected, maybe related to different block sizes. Let's make sure. |
| 284/4 | 2026/03/16 16:27 | tool | codesearch-find-references |
0m | Args:map[IncludeSnippetLines:10 Name:insert_metapage] Results: map[References:[map[ReferenceKind:calls ReferencingEntityKind:function ReferencingEntityName:__get_metapage SourceFile:fs/jfs/jfs_metapage.c SourceLine:772 SourceSnippet: 762: goto unlock;
763: mp->folio = folio;
764: mp->sb = inode->i_sb;
765: mp->flag = 0;
766: mp->xflag = COMMIT_PAGE;
767: mp->count = 1;
768: mp->nohomeok = 0;
769: mp->logical_size = size;
770: mp->data = folio_address(folio) + page_offset;
771: mp->index = lblock;
772: if (unlikely(insert_metapage(folio, mp))) {
773: free_metapage(mp);
774: goto unlock;
775: }
776: lock_metapage(mp);
777: }
778:
779: if (new) {
780: jfs_info("zeroing mp = 0x%p", mp);
781: memset(mp->data, 0, PSIZE);
782: }
] map[ReferenceKind:calls ReferencingEntityKind:function ReferencingEntityName:__metapage_migrate_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:248 SourceSnippet: 238: int rc;
239:
240: mp = folio_to_mp(src, 0);
241: if (metapage_locked(mp))
242: return -EAGAIN;
243:
244: rc = filemap_migrate_folio(mapping, dst, src, mode);
245: if (rc)
246: return rc;
247:
248: if (unlikely(insert_metapage(dst, mp)))
249: return -EAGAIN;
250:
251: page_offset = mp->data - folio_address(src);
252: mp->data = folio_address(dst) + page_offset;
253: mp->folio = dst;
254: remove_metapage(src, mp);
255:
256: return 0;
257: }
] map[ReferenceKind:calls ReferencingEntityKind:function ReferencingEntityName:metapage_read_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:586 SourceSnippet: 576: BUG_ON(!folio_test_locked(folio));
577: page_start = folio_pos(folio) >> inode->i_blkbits;
578:
579: block_offset = 0;
580: while (block_offset < blocks_per_page) {
581: xlen = blocks_per_page - block_offset;
582: pblock = metapage_get_blocks(inode, page_start + block_offset,
583: &xlen);
584: if (pblock) {
585: if (!folio->private)
586: insert_metapage(folio, NULL);
587: inc_io(folio);
588: if (bio)
589: submit_bio(bio);
590:
591: bio = bio_alloc(inode->i_sb->s_bdev, 1, REQ_OP_READ,
592: GFP_NOFS);
593: bio->bi_iter.bi_sector =
594: pblock << (inode->i_blkbits - 9);
595: bio->bi_end_io = metapage_read_end_io;
596: bio->bi_private = folio;
]] TruncatedOutput:false] |
| 285/4 | 2026/03/16 16:27 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 11247 output: 47 thoughts: 29 Thoughts: Here's my take on the thought process, from an expert's perspective: **Diving into the JFS Metapage Code** Okay, hold on a second. Before we get too far down this rabbit hole, let's zoom in on something specific. I need to refresh my memory on how JFS handles metapages, especially in the context of what we're discussing. I think I remember the relevant section being around `fs/jfs/jfs_metapage.c`, right? Let me pull up that file and pinpoint that section around line 213. Yeah, let's take a look there; that's probably where the key logic resides for [add context, what the expert is working on]. I'm expecting to see [expert's specific expectations based on their knowledge, e.g., how the metadata is read, written, cached, or synchronized]. Hopefully, it'll illuminate the interactions in the broader scope. Let's see what that code block is actually doing. |
| 286/4 | 2026/03/16 16:28 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:insert_metapage] Results: map[SourceCode: 213: static inline int insert_metapage(struct folio *folio, struct metapage *mp)
214: {
215: if (mp) {
216: folio_attach_private(folio, mp);
217: kmap(&folio->page);
218: }
219: return 0;
220: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 287/4 | 2026/03/16 16:28 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 11680 output: 19 thoughts: 37 Thoughts: Okay, here's my interpretation of those thoughts, tailored for an expert audience, presented in the first person: **Navigating JFS Metapage Implementation** Right, so I'm looking at `fs/jfs/jfs_metapage.c`, and my initial focus is around line 213. I see there are a few `#ifdef` directives sprinkled around, which naturally piques my interest. These conditional compilation blocks often indicate variations in behavior based on the underlying architecture, kernel configuration, or specific build options. I'll need to carefully examine what each `#ifdef` guards, as that will likely provide insight into how metapage management is adapted for different scenarios within the JFS filesystem. The context around line 213 will be critical to understand the larger implications and the design decisions. I suspect these variations could relate to aspects like page size, memory management strategies, or hardware-specific optimizations for metapage handling. Let's dig in and see what we find. |
| 288/4 | 2026/03/16 16:28 | tool | grepper |
Args:map[Expression:insert_metapage] Results: map[Output:fs/jfs/jfs_metapage.c=84=static inline struct metapage *folio_to_mp(struct folio *folio, int offset)
--
fs/jfs/jfs_metapage.c-92-
fs/jfs/jfs_metapage.c:93:static inline int insert_metapage(struct folio *folio, struct metapage *mp)
fs/jfs/jfs_metapage.c-94-{
--
fs/jfs/jfs_metapage.c=156=static int __metapage_migrate_folio(struct address_space *mapping,
--
fs/jfs/jfs_metapage.c-178- continue;
fs/jfs/jfs_metapage.c:179: if (unlikely(insert_metapage(dst, mp))) {
fs/jfs/jfs_metapage.c-180- /* If error, roll-back previously inserted pages */
--
fs/jfs/jfs_metapage.c=208=static inline struct metapage *folio_to_mp(struct folio *folio, int offset)
--
fs/jfs/jfs_metapage.c-212-
fs/jfs/jfs_metapage.c:213:static inline int insert_metapage(struct folio *folio, struct metapage *mp)
fs/jfs/jfs_metapage.c-214-{
--
fs/jfs/jfs_metapage.c=232=static int __metapage_migrate_folio(struct address_space *mapping,
--
fs/jfs/jfs_metapage.c-247-
fs/jfs/jfs_metapage.c:248: if (unlikely(insert_metapage(dst, mp)))
fs/jfs/jfs_metapage.c-249- return -EAGAIN;
--
fs/jfs/jfs_metapage.c=564=static int metapage_read_folio(struct file *fp, struct folio *folio)
--
fs/jfs/jfs_metapage.c-585- if (!folio->private)
fs/jfs/jfs_metapage.c:586: insert_metapage(folio, NULL);
fs/jfs/jfs_metapage.c-587- inc_io(folio);
--
fs/jfs/jfs_metapage.c=683=struct metapage *__get_metapage(struct inode *inode, unsigned long lblock,
--
fs/jfs/jfs_metapage.c-771- mp->index = lblock;
fs/jfs/jfs_metapage.c:772: if (unlikely(insert_metapage(folio, mp))) {
fs/jfs/jfs_metapage.c-773- free_metapage(mp);
] |
| 289/4 | 2026/03/16 16:28 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 12701 output: 19 thoughts: 30 Thoughts: Here's my analysis of the provided code snippet, thinking through it as if I were the one reading and interpreting it: **Diving into `jfs_metapage.c` (Lines 84-230)** Okay, let's get down to business. I'm focusing on lines 84 through 230 in `fs/jfs/jfs_metapage.c`. Right away, I see this section deals with metapages, which immediately tells me we're likely in the weeds of JFS's internal data structures and how it handles on-disk metadata. I'm assuming this will be critical for understanding how JFS manages block allocation, journaling, and recovery. I'm expecting to see functions dealing with reading, writing, and possibly caching metadata blocks. The naming convention hints that this is all specifically related to the JFS file system implementation, making this a focused dive. I'll be looking for key data structures – `struct metapage`, maybe? – and how they are used. I need to understand the relationships between different metadata components like inode maps, extent trees, and the superblock. I'll need to pay close attention to the lock management; these kinds of operations are usually very sensitive to concurrency and race conditions, so I'm expecting to see spinlocks or mutexes. I'll also be watching for error handling. JFS, like any robust file system, *has* to gracefully handle I/O errors and corrupted metadata. The details on how and when JFS chooses to commit or flush metadata to disk will be critical. I'll be focused on understanding the core mechanics of the metadata management. How does it track free blocks? How does it handle allocation and deallocation? I anticipate a good chunk of this will revolve around managing and optimizing metadata accesses. Let's see how smart JFS is with respect to caching and prefetching. I'll be particularly interested in whether there are any clever techniques to minimize I/O latency. 
I'm also anticipating some interaction with the journaling subsystem, especially if the operations have to be atomic. The goal is to fully understand how JFS ensures data integrity. |
| 290/4 | 2026/03/16 16:28 | tool | grepper |
Args:map[Expression:remove_metapage] Results: map[Output:fs/jfs/jfs_metapage.c=93=static inline int insert_metapage(struct folio *folio, struct metapage *mp)
--
fs/jfs/jfs_metapage.c-117-
fs/jfs/jfs_metapage.c:118:static inline void remove_metapage(struct folio *folio, struct metapage *mp)
fs/jfs/jfs_metapage.c-119-{
--
fs/jfs/jfs_metapage.c=156=static int __metapage_migrate_folio(struct address_space *mapping,
--
fs/jfs/jfs_metapage.c-182- if (mps[j])
fs/jfs/jfs_metapage.c:183: remove_metapage(dst, mps[j]);
fs/jfs/jfs_metapage.c-184- }
--
fs/jfs/jfs_metapage.c-197- mp->folio = dst;
fs/jfs/jfs_metapage.c:198: remove_metapage(src, mp);
fs/jfs/jfs_metapage.c-199- }
--
fs/jfs/jfs_metapage.c=213=static inline int insert_metapage(struct folio *folio, struct metapage *mp)
--
fs/jfs/jfs_metapage.c-221-
fs/jfs/jfs_metapage.c:222:static inline void remove_metapage(struct folio *folio, struct metapage *mp)
fs/jfs/jfs_metapage.c-223-{
--
fs/jfs/jfs_metapage.c=232=static int __metapage_migrate_folio(struct address_space *mapping,
--
fs/jfs/jfs_metapage.c-253- mp->folio = dst;
fs/jfs/jfs_metapage.c:254: remove_metapage(src, mp);
fs/jfs/jfs_metapage.c-255-
--
fs/jfs/jfs_metapage.c=309=static inline void drop_metapage(struct folio *folio, struct metapage *mp)
--
fs/jfs/jfs_metapage.c-313- return;
fs/jfs/jfs_metapage.c:314: remove_metapage(folio, mp);
fs/jfs/jfs_metapage.c-315- INCREMENT(mpStat.pagefree);
--
fs/jfs/jfs_metapage.c=612=static bool metapage_release_folio(struct folio *folio, gfp_t gfp_mask)
--
fs/jfs/jfs_metapage.c-633- remove_from_logsync(mp);
fs/jfs/jfs_metapage.c:634: remove_metapage(folio, mp);
fs/jfs/jfs_metapage.c-635- INCREMENT(mpStat.pagefree);
] |
| 291/4 | 2026/03/16 16:28 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 14026 output: 22 thoughts: 30 Thoughts: Okay, here's my interpretation of reading that code snippet, presented as a first-person account: **Diving into the JFS Metapage Code** Alright, let's take a look at `fs/jfs/jfs_metapage.c`, focusing specifically on lines 84 through 130. I'm going to start by getting a sense of what's happening in this section of code. I'm assuming, given the filename and the line range, that we're dealing with metapage management within the Journaled File System (JFS). I want to understand how it's handling the details of metapage allocation, modification, and potential synchronization. First, I expect to see function declarations, data structure definitions, or maybe some key macro definitions related to metapage handling, like the metadata that is stored. I'll be looking for clues on how the system determines when a metapage needs to be flushed to disk, any locking mechanisms used to protect access to the metapage, and how it handles potential errors. I'll need to pay attention to how this function interacts with the overall journaling process. I'll be scanning the code, watching for variable declarations, conditional statements, and loops that seem to control the lifecycle of the metapage, as well as where data is read from and written to the metapage itself. Given this is JFS, I'm anticipating that these actions are crucial to preserving data integrity and handling journaling correctly. I need to note any unusual or unexpected behaviors, that might impact performance. |
| 292/4 | 2026/03/16 16:28 | tool | grepper |
Args:map[Expression:static inline int insert_metapage] Results: map[Output:fs/jfs/jfs_metapage.c=84=static inline struct metapage *folio_to_mp(struct folio *folio, int offset)
--
fs/jfs/jfs_metapage.c-92-
fs/jfs/jfs_metapage.c:93:static inline int insert_metapage(struct folio *folio, struct metapage *mp)
fs/jfs/jfs_metapage.c-94-{
--
fs/jfs/jfs_metapage.c=208=static inline struct metapage *folio_to_mp(struct folio *folio, int offset)
--
fs/jfs/jfs_metapage.c-212-
fs/jfs/jfs_metapage.c:213:static inline int insert_metapage(struct folio *folio, struct metapage *mp)
fs/jfs/jfs_metapage.c-214-{
] |
| 293/4 | 2026/03/16 16:28 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 14651 output: 47 thoughts: 30 Thoughts: Okay, here's my interpretation of reading that code snippet, presented as a first-person account: **Diving into the JFS Metapage Code** Alright, let's take a look at `fs/jfs/jfs_metapage.c`, focusing specifically on lines 84 through 130. I'm going to start by getting a sense of what's happening in this section of code. I'm assuming, given the filename and the line range, that we're dealing with metapage management within the Journaled File System (JFS). I want to understand how it's handling the details of metapage allocation, modification, and potential synchronization. First, I expect to see function declarations, data structure definitions, or maybe some key macro definitions related to metapage handling, like the metadata that is stored. I'll be looking for clues on how the system determines when a metapage needs to be flushed to disk, any locking mechanisms used to protect access to the metapage, and how it handles potential errors. I'll need to pay attention to how this function interacts with the overall journaling process. I'll be scanning the code, watching for variable declarations, conditional statements, and loops that seem to control the lifecycle of the metapage, as well as where data is read from and written to the metapage itself. Given this is JFS, I'm anticipating that these actions are crucial to preserving data integrity and handling journaling correctly. I need to note any unusual or unexpected behaviors, that might impact performance. |
| 294/4 | 2026/03/16 16:28 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:insert_metapage] Results: map[SourceCode: 213: static inline int insert_metapage(struct folio *folio, struct metapage *mp)
214: {
215: if (mp) {
216: folio_attach_private(folio, mp);
217: kmap(&folio->page);
218: }
219: return 0;
220: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 295/4 | 2026/03/16 16:28 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 15196 output: 28 thoughts: 24 Thoughts: Here's a summary of my thought process: **Deeper Dive Required** Okay, so `codesearch-definition-source` is only returning a single definition. That's a good start, but I know there's more out there. I need to get the complete picture, and relying on just one source is insufficient, especially when dealing with nuanced concepts. Time to pivot and find that second definition. |
| 296/4 | 2026/03/16 16:28 | tool | grepper |
Error:no matches Args: map[Expression:static inline int insert_metapage.*-A 20] Results: map[Output:] |
| 297/4 | 2026/03/16 16:28 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 15349 output: 28 thoughts: 38 Thoughts: Here's my analysis of the provided task, assuming I'm an expert in this field and the user is too: **Diving into JFS Metapage Code** Okay, so the goal is to examine a specific section of the JFS (Journaled File System) metapage code. The direction is clear: I need to use the `read-file` utility or function to directly access the source code. The specific file to target is `fs/jfs/jfs_metapage.c`, and the relevant lines for analysis are from 80 through 130. I'm immediately thinking, "This is likely where some critical low-level metadata handling resides." I'll be looking for things like how the metapage structures are defined, how they're read and written to disk, and any potential locking or synchronization mechanisms involved. Based on the file name, I'd expect to see functions related to managing the super block, inode allocation bitmap, journal log, or potentially block allocation, depending on JFS's specific implementation choices. I'm keen to see if this is related to any known performance bottlenecks or potential race conditions. Time to get to it. |
| 298/4 | 2026/03/16 16:28 | tool | read-file |
Args:map[File:fs/jfs/jfs_metapage.c] Results: map[Contents:// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Copyright (C) International Business Machines Corp., 2000-2005
* Portions Copyright (C) Christoph Hellwig, 2001-2002
*/
#include <linux/blkdev.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/bio.h>
#include <linux/slab.h>
#include <linux/init.h>
#include <linux/buffer_head.h>
#include <linux/mempool.h>
#include <linux/seq_file.h>
#include <linux/writeback.h>
#include <linux/migrate.h>
#include "jfs_incore.h"
#include "jfs_superblock.h"
#include "jfs_filsys.h"
#include "jfs_metapage.h"
#include "jfs_txnmgr.h"
#include "jfs_debug.h"
#ifdef CONFIG_JFS_STATISTICS
static struct {
uint pagealloc; /* # of page allocations */
uint pagefree; /* # of page frees */
uint lockwait; /* # of sleeping lock_metapage() calls */
} mpStat;
#endif
#define metapage_locked(mp) test_bit(META_locked, &(mp)->flag)
#define trylock_metapage(mp) test_and_set_bit_lock(META_locked, &(mp)->flag)
static inline void unlock_metapage(struct metapage *mp)
{
clear_bit_unlock(META_locked, &mp->flag);
wake_up(&mp->wait);
}
static inline void __lock_metapage(struct metapage *mp)
{
DECLARE_WAITQUEUE(wait, current);
INCREMENT(mpStat.lockwait);
add_wait_queue_exclusive(&mp->wait, &wait);
do {
set_current_state(TASK_UNINTERRUPTIBLE);
if (metapage_locked(mp)) {
folio_unlock(mp->folio);
io_schedule();
folio_lock(mp->folio);
}
} while (trylock_metapage(mp));
__set_current_state(TASK_RUNNING);
remove_wait_queue(&mp->wait, &wait);
}
/*
* Must have mp->folio locked
*/
static inline void lock_metapage(struct metapage *mp)
{
if (trylock_metapage(mp))
__lock_metapage(mp);
}
#define METAPOOL_MIN_PAGES 32
static struct kmem_cache *metapage_cache;
static mempool_t *metapage_mempool;
#define MPS_PER_PAGE (PAGE_SIZE >> L2PSIZE)
#if MPS_PER_PAGE > 1
struct meta_anchor {
int mp_count;
atomic_t io_count;
blk_status_t status;
struct metapage *mp[MPS_PER_PAGE];
};
static inline struct metapage *folio_to_mp(struct folio *folio, int offset)
{
struct meta_anchor *anchor = folio->private;
if (!anchor)
return NULL;
return anchor->mp[offset >> L2PSIZE];
}
static inline int insert_metapage(struct folio *folio, struct metapage *mp)
{
struct meta_anchor *a;
int index;
int l2mp_blocks; /* log2 blocks per metapage */
a = folio->private;
if (!a) {
a = kzalloc_obj(struct meta_anchor, GFP_NOFS);
if (!a)
return -ENOMEM;
folio_attach_private(folio, a);
kmap(&folio->page);
}
if (mp) {
l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1);
a->mp_count++;
a->mp[index] = mp;
}
return 0;
}
static inline void remove_metapage(struct folio *folio, struct metapage *mp)
{
struct meta_anchor *a = folio->private;
int l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
int index;
index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1);
BUG_ON(a->mp[index] != mp);
a->mp[index] = NULL;
if (--a->mp_count == 0) {
kfree(a);
folio_detach_private(folio);
kunmap(&folio->page);
}
}
static inline void inc_io(struct folio *folio)
{
struct meta_anchor *anchor = folio->private;
atomic_inc(&anchor->io_count);
}
static inline void dec_io(struct folio *folio, blk_status_t status,
void (*handler)(struct folio *, blk_status_t))
{
struct meta_anchor *anchor = folio->private;
if (anchor->status == BLK_STS_OK)
anchor->status = status;
if (atomic_dec_and_test(&anchor->io_count))
handler(folio, anchor->status);
}
#ifdef CONFIG_MIGRATION
static int __metapage_migrate_folio(struct address_space *mapping,
struct folio *dst, struct folio *src,
enum migrate_mode mode)
{
struct meta_anchor *src_anchor = src->private;
struct metapage *mps[MPS_PER_PAGE] = {0};
struct metapage *mp;
int i, rc;
for (i = 0; i < MPS_PER_PAGE; i++) {
mp = src_anchor->mp[i];
if (mp && metapage_locked(mp))
return -EAGAIN;
}
rc = filemap_migrate_folio(mapping, dst, src, mode);
if (rc)
return rc;
for (i = 0; i < MPS_PER_PAGE; i++) {
mp = src_anchor->mp[i];
if (!mp)
continue;
if (unlikely(insert_metapage(dst, mp))) {
/* If error, roll-back previously inserted pages */
for (int j = 0 ; j < i; j++) {
if (mps[j])
remove_metapage(dst, mps[j]);
}
return -EAGAIN;
}
mps[i] = mp;
}
/* Update the metapage and remove it from src */
for (i = 0; i < MPS_PER_PAGE; i++) {
mp = mps[i];
if (mp) {
int page_offset = mp->data - folio_address(src);
mp->data = folio_address(dst) + page_offset;
mp->folio = dst;
remove_metapage(src, mp);
}
}
return 0;
}
#endif /* CONFIG_MIGRATION */
#else
static inline struct metapage *folio_to_mp(struct folio *folio, int offset)
{
return folio->private;
}
static inline int insert_metapage(struct folio *folio, struct metapage *mp)
{
if (mp) {
folio_attach_private(folio, mp);
kmap(&folio->page);
}
return 0;
}
static inline void remove_metapage(struct folio *folio, struct metapage *mp)
{
folio_detach_private(folio);
kunmap(&folio->page);
}
#define inc_io(folio) do {} while(0)
#define dec_io(folio, status, handler) handler(folio, status)
#ifdef CONFIG_MIGRATION
static int __metapage_migrate_folio(struct address_space *mapping,
struct folio *dst, struct folio *src,
enum migrate_mode mode)
{
struct metapage *mp;
int page_offset;
int rc;
mp = folio_to_mp(src, 0);
if (metapage_locked(mp))
return -EAGAIN;
rc = filemap_migrate_folio(mapping, dst, src, mode);
if (rc)
return rc;
if (unlikely(insert_metapage(dst, mp)))
return -EAGAIN;
page_offset = mp->data - folio_address(src);
mp->data = folio_address(dst) + page_offset;
mp->folio = dst;
remove_metapage(src, mp);
return 0;
}
#endif /* CONFIG_MIGRATION */
#endif
static inline struct metapage *alloc_metapage(gfp_t gfp_mask)
{
struct metapage *mp = mempool_alloc(metapage_mempool, gfp_mask);
if (mp) {
mp->lid = 0;
mp->lsn = 0;
mp->data = NULL;
mp->clsn = 0;
mp->log = NULL;
init_waitqueue_head(&mp->wait);
}
return mp;
}
static inline void free_metapage(struct metapage *mp)
{
mempool_free(mp, metapage_mempool);
}
int __init metapage_init(void)
{
/*
* Allocate the metapage structures
*/
metapage_cache = kmem_cache_create("jfs_mp", sizeof(struct metapage),
0, 0, NULL);
if (metapage_cache == NULL)
return -ENOMEM;
metapage_mempool = mempool_create_slab_pool(METAPOOL_MIN_PAGES,
metapage_cache);
if (metapage_mempool == NULL) {
kmem_cache_destroy(metapage_cache);
return -ENOMEM;
}
return 0;
}
void metapage_exit(void)
{
mempool_destroy(metapage_mempool);
kmem_cache_destroy(metapage_cache);
}
static inline void drop_metapage(struct folio *folio, struct metapage *mp)
{
if (mp->count || mp->nohomeok || test_bit(META_dirty, &mp->flag) ||
test_bit(META_io, &mp->flag))
return;
remove_metapage(folio, mp);
INCREMENT(mpStat.pagefree);
free_metapage(mp);
}
/*
* Metapage address space operations
*/
static sector_t metapage_get_blocks(struct inode *inode, sector_t lblock,
int *len)
{
int rc = 0;
int xflag;
s64 xaddr;
sector_t file_blocks = (inode->i_size + inode->i_sb->s_blocksize - 1) >>
inode->i_blkbits;
if (lblock >= file_blocks)
return 0;
if (lblock + *len > file_blocks)
*len = file_blocks - lblock;
if (inode->i_ino) {
rc = xtLookup(inode, (s64)lblock, *len, &xflag, &xaddr, len, 0);
if ((rc == 0) && *len)
lblock = (sector_t)xaddr;
else
lblock = 0;
} /* else no mapping */
return lblock;
}
static void last_read_complete(struct folio *folio, blk_status_t status)
{
if (status)
printk(KERN_ERR "Read error %d at %#llx\n", status,
folio_pos(folio));
folio_end_read(folio, status == 0);
}
static void metapage_read_end_io(struct bio *bio)
{
struct folio *folio = bio->bi_private;
dec_io(folio, bio->bi_status, last_read_complete);
bio_put(bio);
}
static void remove_from_logsync(struct metapage *mp)
{
struct jfs_log *log = mp->log;
unsigned long flags;
/*
* This can race. Recheck that log hasn't been set to null, and after
* acquiring logsync lock, recheck lsn
*/
if (!log)
return;
LOGSYNC_LOCK(log, flags);
if (mp->lsn) {
mp->log = NULL;
mp->lsn = 0;
mp->clsn = 0;
log->count--;
list_del(&mp->synclist);
}
LOGSYNC_UNLOCK(log, flags);
}
static void last_write_complete(struct folio *folio, blk_status_t status)
{
struct metapage *mp;
unsigned int offset;
if (status) {
int err = blk_status_to_errno(status);
printk(KERN_ERR "metapage_write_end_io: I/O error\n");
mapping_set_error(folio->mapping, err);
}
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (mp && test_bit(META_io, &mp->flag)) {
if (mp->lsn)
remove_from_logsync(mp);
clear_bit(META_io, &mp->flag);
}
/*
* I'd like to call drop_metapage here, but I don't think it's
* safe unless I have the page locked
*/
}
folio_end_writeback(folio);
}
static void metapage_write_end_io(struct bio *bio)
{
struct folio *folio = bio->bi_private;
BUG_ON(!folio->private);
dec_io(folio, bio->bi_status, last_write_complete);
bio_put(bio);
}
static int metapage_write_folio(struct folio *folio,
struct writeback_control *wbc)
{
struct bio *bio = NULL;
int block_offset; /* block offset of mp within page */
struct inode *inode = folio->mapping->host;
int blocks_per_mp = JFS_SBI(inode->i_sb)->nbperpage;
int len;
int xlen;
struct metapage *mp;
int redirty = 0;
sector_t lblock;
int nr_underway = 0;
sector_t pblock;
sector_t next_block = 0;
sector_t page_start;
unsigned long bio_bytes = 0;
unsigned long bio_offset = 0;
int offset;
int bad_blocks = 0;
page_start = folio_pos(folio) >> inode->i_blkbits;
BUG_ON(!folio_test_locked(folio));
BUG_ON(folio_test_writeback(folio));
folio_start_writeback(folio);
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (!mp || !test_bit(META_dirty, &mp->flag))
continue;
if (mp->nohomeok && !test_bit(META_forcewrite, &mp->flag)) {
redirty = 1;
/*
* Make sure this page isn't blocked indefinitely.
* If the journal isn't undergoing I/O, push it
*/
if (mp->log && !(mp->log->cflag & logGC_PAGEOUT))
jfs_flush_journal(mp->log, 0);
continue;
}
clear_bit(META_dirty, &mp->flag);
set_bit(META_io, &mp->flag);
block_offset = offset >> inode->i_blkbits;
lblock = page_start + block_offset;
if (bio) {
if (xlen && lblock == next_block) {
/* Contiguous, in memory & on disk */
len = min(xlen, blocks_per_mp);
xlen -= len;
bio_bytes += len << inode->i_blkbits;
continue;
}
/* Not contiguous */
bio_add_folio_nofail(bio, folio, bio_bytes, bio_offset);
/*
* Increment counter before submitting i/o to keep
* count from hitting zero before we're through
*/
inc_io(folio);
if (!bio->bi_iter.bi_size)
goto dump_bio;
submit_bio(bio);
nr_underway++;
bio = NULL;
} else
inc_io(folio);
xlen = (folio_size(folio) - offset) >> inode->i_blkbits;
pblock = metapage_get_blocks(inode, lblock, &xlen);
if (!pblock) {
printk(KERN_ERR "JFS: metapage_get_blocks failed\n");
/*
* We already called inc_io(), but can't cancel it
* with dec_io() until we're done with the page
*/
bad_blocks++;
continue;
}
len = min(xlen, (int)JFS_SBI(inode->i_sb)->nbperpage);
bio = bio_alloc(inode->i_sb->s_bdev, 1, REQ_OP_WRITE, GFP_NOFS);
bio->bi_iter.bi_sector = pblock << (inode->i_blkbits - 9);
bio->bi_end_io = metapage_write_end_io;
bio->bi_private = folio;
/* Don't call bio_add_page yet, we may add to this vec */
bio_offset = offset;
bio_bytes = len << inode->i_blkbits;
xlen -= len;
next_block = lblock + len;
}
if (bio) {
bio_add_folio_nofail(bio, folio, bio_bytes, bio_offset);
if (!bio->bi_iter.bi_size)
goto dump_bio;
submit_bio(bio);
nr_underway++;
}
if (redirty)
folio_redirty_for_writepage(wbc, folio);
folio_unlock(folio);
if (bad_blocks)
goto err_out;
if (nr_underway == 0)
folio_end_writeback(folio);
return 0;
dump_bio:
print_hex_dump(KERN_ERR, "JFS: dump of bio: ", DUMP_PREFIX_ADDRESS, 16,
4, bio, sizeof(*bio), 0);
bio_put(bio);
folio_unlock(folio);
dec_io(folio, BLK_STS_OK, last_write_complete);
err_out:
while (bad_blocks--)
dec_io(folio, BLK_STS_OK, last_write_complete);
return -EIO;
}
static int metapage_writepages(struct address_space *mapping,
struct writeback_control *wbc)
{
struct blk_plug plug;
struct folio *folio = NULL;
int err;
blk_start_plug(&plug);
while ((folio = writeback_iter(mapping, wbc, folio, &err)))
err = metapage_write_folio(folio, wbc);
blk_finish_plug(&plug);
return err;
}
static int metapage_read_folio(struct file *fp, struct folio *folio)
{
struct inode *inode = folio->mapping->host;
struct bio *bio = NULL;
int block_offset;
int blocks_per_page = i_blocks_per_folio(inode, folio);
sector_t page_start; /* address of page in fs blocks */
sector_t pblock;
int xlen;
unsigned int len;
int offset;
BUG_ON(!folio_test_locked(folio));
page_start = folio_pos(folio) >> inode->i_blkbits;
block_offset = 0;
while (block_offset < blocks_per_page) {
xlen = blocks_per_page - block_offset;
pblock = metapage_get_blocks(inode, page_start + block_offset,
&xlen);
if (pblock) {
if (!folio->private)
insert_metapage(folio, NULL);
inc_io(folio);
if (bio)
submit_bio(bio);
bio = bio_alloc(inode->i_sb->s_bdev, 1, REQ_OP_READ,
GFP_NOFS);
bio->bi_iter.bi_sector =
pblock << (inode->i_blkbits - 9);
bio->bi_end_io = metapage_read_end_io;
bio->bi_private = folio;
len = xlen << inode->i_blkbits;
offset = block_offset << inode->i_blkbits;
bio_add_folio_nofail(bio, folio, len, offset);
block_offset += xlen;
} else
block_offset++;
}
if (bio)
submit_bio(bio);
else
folio_unlock(folio);
return 0;
}
static bool metapage_release_folio(struct folio *folio, gfp_t gfp_mask)
{
struct metapage *mp;
bool ret = true;
int offset;
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (!mp)
continue;
jfs_info("metapage_release_folio: mp = 0x%p", mp);
if (mp->count || mp->nohomeok ||
test_bit(META_dirty, &mp->flag)) {
jfs_info("count = %ld, nohomeok = %d", mp->count,
mp->nohomeok);
ret = false;
continue;
}
if (mp->lsn)
remove_from_logsync(mp);
remove_metapage(folio, mp);
INCREMENT(mpStat.pagefree);
free_metapage(mp);
}
return ret;
}
#ifdef CONFIG_MIGRATION
/*
* metapage_migrate_folio - Migration function for JFS metapages
*/
static int metapage_migrate_folio(struct address_space *mapping,
struct folio *dst, struct folio *src,
enum migrate_mode mode)
{
int expected_count;
if (!src->private)
return filemap_migrate_folio(mapping, dst, src, mode);
/* Check whether page does not have extra refs before we do more work */
expected_count = folio_expected_ref_count(src) + 1;
if (folio_ref_count(src) != expected_count)
return -EAGAIN;
return __metapage_migrate_folio(mapping, dst, src, mode);
}
#else
#define metapage_migrate_folio NULL
#endif /* CONFIG_MIGRATION */
static void metapage_invalidate_folio(struct folio *folio, size_t offset,
size_t length)
{
BUG_ON(offset || length < folio_size(folio));
BUG_ON(folio_test_writeback(folio));
metapage_release_folio(folio, 0);
}
const struct address_space_operations jfs_metapage_aops = {
.read_folio = metapage_read_folio,
.writepages = metapage_writepages,
.release_folio = metapage_release_folio,
.invalidate_folio = metapage_invalidate_folio,
.dirty_folio = filemap_dirty_folio,
.migrate_folio = metapage_migrate_folio,
};
struct metapage *__get_metapage(struct inode *inode, unsigned long lblock,
unsigned int size, int absolute,
unsigned long new)
{
int l2BlocksPerPage;
int l2bsize;
struct address_space *mapping;
struct metapage *mp = NULL;
struct folio *folio;
unsigned long page_index;
unsigned long page_offset;
jfs_info("__get_metapage: ino = %ld, lblock = 0x%lx, abs=%d",
inode->i_ino, lblock, absolute);
l2bsize = inode->i_blkbits;
l2BlocksPerPage = PAGE_SHIFT - l2bsize;
page_index = lblock >> l2BlocksPerPage;
page_offset = (lblock - (page_index << l2BlocksPerPage)) << l2bsize;
if ((page_offset + size) > PAGE_SIZE) {
jfs_err("MetaData crosses page boundary!!");
jfs_err("lblock = %lx, size = %d", lblock, size);
dump_stack();
return NULL;
}
if (absolute)
mapping = JFS_SBI(inode->i_sb)->direct_inode->i_mapping;
else {
/*
* If an nfs client tries to read an inode that is larger
* than any existing inodes, we may try to read past the
* end of the inode map
*/
if ((lblock << inode->i_blkbits) >= inode->i_size)
return NULL;
mapping = inode->i_mapping;
}
if (new && (PSIZE == PAGE_SIZE)) {
folio = filemap_grab_folio(mapping, page_index);
if (IS_ERR(folio)) {
jfs_err("filemap_grab_folio failed!");
return NULL;
}
folio_mark_uptodate(folio);
} else {
folio = read_mapping_folio(mapping, page_index, NULL);
if (IS_ERR(folio)) {
jfs_err("read_mapping_page failed!");
return NULL;
}
folio_lock(folio);
}
mp = folio_to_mp(folio, page_offset);
if (mp) {
if (mp->logical_size != size) {
jfs_error(inode->i_sb,
"get_mp->logical_size != size\n");
jfs_err("logical_size = %d, size = %d",
mp->logical_size, size);
dump_stack();
goto unlock;
}
mp->count++;
lock_metapage(mp);
if (test_bit(META_discard, &mp->flag)) {
if (!new) {
jfs_error(inode->i_sb,
"using a discarded metapage\n");
discard_metapage(mp);
goto unlock;
}
clear_bit(META_discard, &mp->flag);
}
} else {
INCREMENT(mpStat.pagealloc);
mp = alloc_metapage(GFP_NOFS);
if (!mp)
goto unlock;
mp->folio = folio;
mp->sb = inode->i_sb;
mp->flag = 0;
mp->xflag = COMMIT_PAGE;
mp->count = 1;
mp->nohomeok = 0;
mp->logical_size = size;
mp->data = folio_address(folio) + page_offset;
mp->index = lblock;
if (unlikely(insert_metapage(folio, mp))) {
free_metapage(mp);
goto unlock;
}
lock_metapage(mp);
}
if (new) {
jfs_info("zeroing mp = 0x%p", mp);
memset(mp->data, 0, PSIZE);
}
folio_unlock(folio);
jfs_info("__get_metapage: returning = 0x%p data = 0x%p", mp, mp->data);
return mp;
unlock:
folio_unlock(folio);
return NULL;
}
void grab_metapage(struct metapage * mp)
{
jfs_info("grab_metapage: mp = 0x%p", mp);
folio_get(mp->folio);
folio_lock(mp->folio);
mp->count++;
lock_metapage(mp);
folio_unlock(mp->folio);
}
static int metapage_write_one(struct folio *folio)
{
struct address_space *mapping = folio->mapping;
struct writeback_control wbc = {
.sync_mode = WB_SYNC_ALL,
.nr_to_write = folio_nr_pages(folio),
};
int ret = 0;
BUG_ON(!folio_test_locked(folio));
folio_wait_writeback(folio);
if (folio_clear_dirty_for_io(folio)) {
folio_get(folio);
ret = metapage_write_folio(folio, &wbc);
if (ret == 0)
folio_wait_writeback(folio);
folio_put(folio);
} else {
folio_unlock(folio);
}
if (!ret)
ret = filemap_check_errors(mapping);
return ret;
}
void force_metapage(struct metapage *mp)
{
struct folio *folio = mp->folio;
jfs_info("force_metapage: mp = 0x%p", mp);
set_bit(META_forcewrite, &mp->flag);
clear_bit(META_sync, &mp->flag);
folio_get(folio);
folio_lock(folio);
folio_mark_dirty(folio);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
clear_bit(META_forcewrite, &mp->flag);
folio_put(folio);
}
void hold_metapage(struct metapage *mp)
{
folio_lock(mp->folio);
}
void put_metapage(struct metapage *mp)
{
if (mp->count || mp->nohomeok) {
/* Someone else will release this */
folio_unlock(mp->folio);
return;
}
folio_get(mp->folio);
mp->count++;
lock_metapage(mp);
folio_unlock(mp->folio);
release_metapage(mp);
}
void release_metapage(struct metapage * mp)
{
struct folio *folio = mp->folio;
jfs_info("release_metapage: mp = 0x%p, flag = 0x%lx", mp, mp->flag);
folio_lock(folio);
unlock_metapage(mp);
assert(mp->count);
if (--mp->count || mp->nohomeok) {
folio_unlock(folio);
folio_put(folio);
return;
}
if (test_bit(META_dirty, &mp->flag)) {
folio_mark_dirty(folio);
if (test_bit(META_sync, &mp->flag)) {
clear_bit(META_sync, &mp->flag);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
folio_lock(folio);
}
} else if (mp->lsn) /* discard_metapage doesn't remove it */
remove_from_logsync(mp);
/* Try to keep metapages from using up too much memory */
drop_metapage(folio, mp);
folio_unlock(folio);
folio_put(folio);
}
void __invalidate_metapages(struct inode *ip, s64 addr, int len)
{
sector_t lblock;
int l2BlocksPerPage = PAGE_SHIFT - ip->i_blkbits;
int BlocksPerPage = 1 << l2BlocksPerPage;
/* All callers are interested in block device's mapping */
struct address_space *mapping =
JFS_SBI(ip->i_sb)->direct_inode->i_mapping;
struct metapage *mp;
unsigned int offset;
/*
* Mark metapages to discard. They will eventually be
* released, but should not be written.
*/
for (lblock = addr & ~(BlocksPerPage - 1); lblock < addr + len;
lblock += BlocksPerPage) {
struct folio *folio = filemap_lock_folio(mapping,
lblock >> l2BlocksPerPage);
if (IS_ERR(folio))
continue;
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (!mp)
continue;
if (mp->index < addr)
continue;
if (mp->index >= addr + len)
break;
clear_bit(META_dirty, &mp->flag);
set_bit(META_discard, &mp->flag);
if (mp->lsn)
remove_from_logsync(mp);
}
folio_unlock(folio);
folio_put(folio);
}
}
#ifdef CONFIG_JFS_STATISTICS
int jfs_mpstat_proc_show(struct seq_file *m, void *v)
{
seq_printf(m,
"JFS Metapage statistics\n"
"=======================\n"
"page allocations = %d\n"
"page frees = %d\n"
"lock waits = %d\n",
mpStat.pagealloc,
mpStat.pagefree,
mpStat.lockwait);
return 0;
}
#endif
] |
| 299/4 | 2026/03/16 16:28 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 24249 output: 196 thoughts: 733 Thoughts: **Analyzing Metapage Insertion and Removal with Large Pages**
Alright, let's dive into these `insert_metapage` and `remove_metapage` functions, specifically when `MPS_PER_PAGE` is greater than 1, implying large page sizes exceeding 4KB. My initial thought is to scrutinize the block size calculations within these routines, focusing on potential issues that might arise when dealing with folios and their associated mappings.
My primary concern centers around the possibility of `folio->mapping` being NULL during the execution of either function. If this occurs, accessing `folio->mapping->host->i_blkbits` will lead to a null pointer dereference, which is, of course, a critical bug. My solution is to use `mp->sb->s_blocksize_bits` instead of relying on `folio->mapping->host->i_blkbits`, since `mp->sb->s_blocksize_bits` is always accessible from the metapage's superblock.
However, I need to be careful with `insert_metapage` since `mp` can be NULL! In the current code:
```c
if (mp) {
l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
```
When `mp` is NULL, this branch is skipped entirely, so `folio->mapping->host->i_blkbits` is never dereferenced and there is no null pointer dereference.
When `mp` is not NULL, we can use `mp->sb->s_blocksize_bits`.
Let's look at `remove_metapage`:
```c
static inline void remove_metapage(struct folio *folio, struct metapage *mp)
{
struct meta_anchor *a = folio->private;
int l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
```
Here `mp` is never NULL (it is used to calculate the index), so we can safely use `mp->sb->s_blocksize_bits` instead of `folio->mapping->host->i_blkbits`.
Therefore, I'll revise the code to employ `mp->sb->s_blocksize_bits` in both functions whenever `mp` is not NULL, and ensure the appropriate handling of the NULL `mp` case. This approach will ensure a robust and safer implementation for metapage management, even under the described edge conditions.
|
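The truncation hazard described in the thought above can be reproduced outside the kernel with a small model. Everything below (the struct layouts, the `l2mp_blocks_via_*` function names, the `L2PSIZE` value) is a simplified stand-in for illustration, not the real code from fs/jfs/jfs_metapage.c:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel structs -- illustration only, not the
 * real definitions used by fs/jfs/jfs_metapage.c. */
struct inode         { int i_blkbits; };
struct address_space { struct inode *host; };
struct super_block   { int s_blocksize_bits; };
struct folio         { struct address_space *mapping; };
struct metapage      { struct super_block *sb; };

#define L2PSIZE 12 /* log2 of the 4096-byte metapage size */

/* Old calculation: dereferences folio->mapping, which truncation sets to
 * NULL, so this path can crash. */
static int l2mp_blocks_via_mapping(struct folio *folio)
{
	return L2PSIZE - folio->mapping->host->i_blkbits;
}

/* Fixed calculation: mp->sb remains valid for the metapage's lifetime. */
static int l2mp_blocks_via_sb(struct metapage *mp)
{
	return L2PSIZE - mp->sb->s_blocksize_bits;
}
```

Both calculations agree while the folio is still mapped; only the superblock-based one stays safe after `folio->mapping` is set to NULL.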
| 300/4 | 2026/03/16 16:28 | tool | codeeditor |
Args:map[CurrentCode: if (mp) {
l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1);
a->mp_count++;
a->mp[index] = mp;
} NewCode: if (mp) {
l2mp_blocks = L2PSIZE - mp->sb->s_blocksize_bits;
index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1);
a->mp_count++;
a->mp[index] = mp;
} SourceFile:fs/jfs/jfs_metapage.c]Results: map[] |
| 301/4 | 2026/03/16 16:28 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 25719 output: 222 thoughts: 8 Thoughts: **Now Removing the Metapage** Right, so we're at the point of needing to remove the metapage. Before doing that I want to map the downstream effects: which callers depend on it directly, and which reach it indirectly through other functions? I also need a replacement strategy for anything that relied on it, with attention to performance and scalability, robust error handling with logging and monitoring in case the replacement path fails, and a comprehensive test plan (unit, integration, and regression) covering both the core functionality and its potential failure modes. |
| 302/4 | 2026/03/16 16:29 | tool | codeeditor |
Args:map[CurrentCode:static inline void remove_metapage(struct folio *folio, struct metapage *mp)
{
struct meta_anchor *a = folio->private;
int l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
int index;
index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1); NewCode:static inline void remove_metapage(struct folio *folio, struct metapage *mp)
{
struct meta_anchor *a = folio->private;
int l2mp_blocks = L2PSIZE - mp->sb->s_blocksize_bits;
int index;
index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1); SourceFile:fs/jfs/jfs_metapage.c]Results: map[] |
| 303/4 | 2026/03/16 16:29 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 26405 output: 47 thoughts: 13 Thoughts: **Analyzing `release_metapage`** Alright, the instruction is to examine `release_metapage`. The name suggests it handles the release side of the metapage lifecycle, so before changing anything I want its full source: what it locks, when it writes back, and how it interacts with the logsync list. Let's pull up the definition and see what's involved! |
| 304/4 | 2026/03/16 16:29 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:release_metapage] Results: map[SourceCode: 865: void release_metapage(struct metapage * mp)
866: {
867: struct folio *folio = mp->folio;
868: jfs_info("release_metapage: mp = 0x%p, flag = 0x%lx", mp, mp->flag);
869:
870: folio_lock(folio);
871: unlock_metapage(mp);
872:
873: assert(mp->count);
874: if (--mp->count || mp->nohomeok) {
875: folio_unlock(folio);
876: folio_put(folio);
877: return;
878: }
879:
880: if (test_bit(META_dirty, &mp->flag)) {
881: folio_mark_dirty(folio);
882: if (test_bit(META_sync, &mp->flag)) {
883: clear_bit(META_sync, &mp->flag);
884: if (metapage_write_one(folio))
885: jfs_error(mp->sb, "metapage_write_one() failed\n");
886: folio_lock(folio);
887: }
888: } else if (mp->lsn) /* discard_metapage doesn't remove it */
889: remove_from_logsync(mp);
890:
891: /* Try to keep metapages from using up too much memory */
892: drop_metapage(folio, mp);
893:
894: folio_unlock(folio);
895: folio_put(folio);
896: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 305/4 | 2026/03/16 16:29 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 27191 output: 395 thoughts: 238 Thoughts: **Analyzing the Proposed `release_metapage` Fix** Okay, so I'm looking at this suggested modification for the `release_metapage` function. My initial impression is that it seems like a pretty focused update, specifically dealing with the handling of metapages and their associated states. The code's logic is pretty straightforward, which is what I'm looking for. I see it's checking if the `folio->mapping` exists. If so, and the metapage is dirty, it marks the folio dirty, and if the metapage also needs to be synced, it clears the sync flag, attempts to write the metapage, and then acquires a lock on the folio. It makes sense, keep things clean and synchronous. A failure in `metapage_write_one` triggers an error. If the metapage isn't dirty, but it has an LSN, then we remove it from the logsync structure. If `folio->mapping` doesn't exist (else branch), we just need to clear any dirty or sync flags and, if an LSN exists, remove it from logsync. Essentially, ensuring that we're properly cleaning up a metapage when it's not actually associated with a valid mapping. I've seen that behavior before, nothing unexpected. Overall, it looks like a reasonable fix. The conditions are handled in a structured and safe way, covering the dirty state and the necessity to actually sync the modified data to disk. The use of clear_bit is appropriate for updating flags, and calling remove_from_logsync when necessary makes perfect sense for ensuring data integrity and consistency. I'm comfortable with this change. It's concise and seems to correctly address the potential issues in the original function. Let's incorporate it. |
| 306/4 | 2026/03/16 16:29 | tool | codeeditor |
Args:map[CurrentCode: if (test_bit(META_dirty, &mp->flag)) {
folio_mark_dirty(folio);
if (test_bit(META_sync, &mp->flag)) {
clear_bit(META_sync, &mp->flag);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
folio_lock(folio);
}
} else if (mp->lsn) /* discard_metapage doesn't remove it */
remove_from_logsync(mp); NewCode: if (folio->mapping) {
if (test_bit(META_dirty, &mp->flag)) {
folio_mark_dirty(folio);
if (test_bit(META_sync, &mp->flag)) {
clear_bit(META_sync, &mp->flag);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
folio_lock(folio);
}
} else if (mp->lsn) /* discard_metapage doesn't remove it */
remove_from_logsync(mp);
} else {
clear_bit(META_dirty, &mp->flag);
clear_bit(META_sync, &mp->flag);
if (mp->lsn)
remove_from_logsync(mp);
} SourceFile:fs/jfs/jfs_metapage.c]Results: map[] |
| 307/4 | 2026/03/16 16:29 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 28204 output: 47 thoughts: 13 Thoughts: Okay, let's dive into `force_metapage`. The questions I care about: when is it triggered, what locks does it take, does it override a default behavior, and what happens if a forced write clashes with concurrent processes or locks? What edge cases could it expose? I'm also keen to understand its error handling and its impact on overall system stability. Let's look at its definition. |
| 308/4 | 2026/03/16 16:29 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:force_metapage] Results: map[SourceCode: 831: void force_metapage(struct metapage *mp)
832: {
833: struct folio *folio = mp->folio;
834: jfs_info("force_metapage: mp = 0x%p", mp);
835: set_bit(META_forcewrite, &mp->flag);
836: clear_bit(META_sync, &mp->flag);
837: folio_get(folio);
838: folio_lock(folio);
839: folio_mark_dirty(folio);
840: if (metapage_write_one(folio))
841: jfs_error(mp->sb, "metapage_write_one() failed\n");
842: clear_bit(META_forcewrite, &mp->flag);
843: folio_put(folio);
844: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 309/4 | 2026/03/16 16:29 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 28801 output: 222 thoughts: 138 Thoughts: **Analyzing the Force Metapage Fix** Okay, so I'm looking at this proposed fix for `force_metapage`. It seems straightforward enough, but let's break it down. First, we're locking the folio—that's good, protects against race conditions. Then we check if the folio has a mapping. If it does, we mark it dirty, meaning we've got changes to write. Next, we call `metapage_write_one()`. If that function fails, we log an error – crucial for debugging, of course. Now, the interesting part is the `else` block. If the folio *doesn't* have a mapping, we clear the `META_dirty` flag and unlock the folio. Finally, we clear the `META_forcewrite` flag regardless. This seems sensible; it means we're done forcing the write. The key considerations here are atomicity and data consistency. By locking the folio, we're ensuring that the write operations are synchronized. Marking the folio dirty indicates that the data needs to be flushed. The `metapage_write_one()` function is likely the one doing the actual work of writing the metadata to disk. The `else` block seems to handle scenarios where the folio isn't actively mapped, which is perfectly reasonable. Clearing the `META_forcewrite` flag is the final step, ensuring this particular force write attempt is not repeated unnecessarily. This all looks like a robust and well-thought-out solution to address whatever problem this addresses. Let's apply it and see how it performs under stress testing. |
| 310/4 | 2026/03/16 16:29 | tool | codeeditor |
Args:map[CurrentCode: folio_lock(folio);
folio_mark_dirty(folio);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
clear_bit(META_forcewrite, &mp->flag); NewCode: folio_lock(folio);
if (folio->mapping) {
folio_mark_dirty(folio);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
} else {
clear_bit(META_dirty, &mp->flag);
folio_unlock(folio);
}
clear_bit(META_forcewrite, &mp->flag); SourceFile:fs/jfs/jfs_metapage.c]Results: map[] |
| 311/4 | 2026/03/16 16:29 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 29505 output: 49 thoughts: 1279 Thoughts: **Analyzing Metapage Synchronization**
Okay, so I'm looking at this `force_metapage` function and I need to make sure I haven't missed any synchronization issues with that `META_sync` bit. I had a quick thought: should `clear_bit(META_sync, &mp->flag)` also be done when `folio->mapping` is NULL in `force_metapage`? Let's check the original code:
Initially, in the base code I see:
```c
set_bit(META_forcewrite, &mp->flag);
clear_bit(META_sync, &mp->flag);
folio_get(folio);
folio_lock(folio);
folio_mark_dirty(folio);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
clear_bit(META_forcewrite, &mp->flag);
folio_put(folio);
```
But, `clear_bit(META_sync, &mp->flag)` is already performed before the `folio_lock(folio)`. So, that answers that. The initial sequence of operations seems sound. Let's see the changes in `fs/jfs/jfs_metapage.c`, where there are a few functions that could be related to my concern.
Here are the related functions:
The `insert_metapage`, `remove_metapage`, `force_metapage` and `release_metapage` functions are:
`insert_metapage`:
```c
static inline int insert_metapage(struct folio *folio, struct metapage *mp)
{
struct meta_anchor *a;
int index;
int l2mp_blocks; /* log2 blocks per metapage */
a = folio->private;
if (!a) {
a = kzalloc_obj(struct meta_anchor, GFP_NOFS);
if (!a)
return -ENOMEM;
folio_attach_private(folio, a);
kmap(&folio->page);
}
if (mp) {
l2mp_blocks = L2PSIZE - mp->sb->s_blocksize_bits;
index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1);
a->mp_count++;
a->mp[index] = mp;
}
return 0;
}
```
`remove_metapage`:
```c
static inline void remove_metapage(struct folio *folio, struct metapage *mp)
{
struct meta_anchor *a = folio->private;
int l2mp_blocks = L2PSIZE - mp->sb->s_blocksize_bits;
int index;
index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1);
BUG_ON(a->mp[index] != mp);
a->mp[index] = NULL;
if (--a->mp_count == 0) {
kfree(a);
folio_detach_private(folio);
kunmap(&folio->page);
}
}
```
`force_metapage`:
```c
void force_metapage(struct metapage *mp)
{
struct folio *folio = mp->folio;
jfs_info("force_metapage: mp = 0x%p", mp);
set_bit(META_forcewrite, &mp->flag);
clear_bit(META_sync, &mp->flag);
folio_get(folio);
folio_lock(folio);
if (folio->mapping) {
folio_mark_dirty(folio);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
} else {
clear_bit(META_dirty, &mp->flag);
folio_unlock(folio);
}
clear_bit(META_forcewrite, &mp->flag);
folio_put(folio);
}
```
`release_metapage`:
```c
void release_metapage(struct metapage * mp)
{
struct folio *folio = mp->folio;
jfs_info("
|
| 312/4 | 2026/03/16 16:29 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:metapage_write_one] Results: map[SourceCode: 803: static int metapage_write_one(struct folio *folio)
804: {
805: struct address_space *mapping = folio->mapping;
806: struct writeback_control wbc = {
807: .sync_mode = WB_SYNC_ALL,
808: .nr_to_write = folio_nr_pages(folio),
809: };
810: int ret = 0;
811:
812: BUG_ON(!folio_test_locked(folio));
813:
814: folio_wait_writeback(folio);
815:
816: if (folio_clear_dirty_for_io(folio)) {
817: folio_get(folio);
818: ret = metapage_write_folio(folio, &wbc);
819: if (ret == 0)
820: folio_wait_writeback(folio);
821: folio_put(folio);
822: } else {
823: folio_unlock(folio);
824: }
825:
826: if (!ret)
827: ret = filemap_check_errors(mapping);
828: return ret;
829: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 313/4 | 2026/03/16 16:29 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 32259 output: 47 thoughts: 577 Thoughts: **Analyzing Folio Locking/Unlocking in Metapage Operations** Okay, let's break this down. My initial thought process was: `metapage_write_folio` and `metapage_write_one` both appear to unlock the folio. Looking at `force_metapage`, after the call to `metapage_write_one`, the folio is unlocked. Therefore, if `folio->mapping` is NULL, I figured we *should* unlock the folio immediately. The snippet in `force_metapage` seemed to confirm this, with the check for `folio->mapping` and subsequent `folio_unlock` call when it's NULL. That part looks right. But, hang on... let's check `release_metapage`. *This* is where things get interesting. In `release_metapage`, when `folio->mapping` is NULL, we *don't* call `metapage_write_one`. The folio *remains* locked at this point. Then, we go through `drop_metapage(folio, mp);` and *then* finally call `folio_unlock(folio);`. Aha! This actually *does* match what happens in the case where the folio is dirty but not being synchronized (i.e. `META_dirty` is set but `META_sync` is clear), where `metapage_write_one` isn't called, and we rely on the final `folio_unlock` call. So, even though the control flow is different, the *locking behavior* is consistent. We're unlocking in `release_metapage` in cases where `folio->mapping` is NULL, ensuring the folio doesn't stay locked indefinitely. Now, let's just make sure `drop_metapage` doesn't cause any issues. |
| 314/4 | 2026/03/16 16:30 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:drop_metapage] Results: map[SourceCode: 309: static inline void drop_metapage(struct folio *folio, struct metapage *mp)
310: {
311: if (mp->count || mp->nohomeok || test_bit(META_dirty, &mp->flag) ||
312: test_bit(META_io, &mp->flag))
313: return;
314: remove_metapage(folio, mp);
315: INCREMENT(mpStat.pagefree);
316: free_metapage(mp);
317: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 315/4 | 2026/03/16 16:30 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 33468 output: 30 thoughts: 698 Thoughts: **Analyzing Metapage Handling** Okay, so I'm walking through this metapage management logic, and the core idea seems sound. If `META_dirty` is cleared, `drop_metapage` is going to call `remove_metapage` and `free_metapage`, which is precisely what I'd expect. No red flags there. But then, I need to consider a potential edge case: what if `mp->lsn` has a non-zero value? Looking at `release_metapage`, it seems the logic handles this correctly: the `META_dirty` and `META_sync` bits are cleared and `remove_from_logsync` is called if `mp->lsn` is set. So far, so good, this aligns with the usual writeback behavior. Alright, let's revisit `metapage_write_one`. The critical part is that if `folio_clear_dirty_for_io` returns true, the folio was dirty: it takes a reference, writes the folio, and waits for writeback; otherwise it just unlocks the folio. Now, looking at `release_metapage` again, it calls `folio_lock(folio)` after `metapage_write_one`. So the folio is locked again after the write. Then, `drop_metapage` is called, and finally, `folio_unlock(folio)` is called. This sequence ensures proper locking and unlocking, so no obvious issues there. Now, I'm going to shift focus and dig into the `insert_metapage` and `remove_metapage` changes. They use the block size from the superblock, so I need to make sure `mp->sb` is always valid when `insert_metapage` is called. It depends on where it's used. Let me go find the call sites for `insert_metapage` and verify this assumption. |
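The memory-leak angle touched on in this thought can be demonstrated with a small model of `drop_metapage()`'s early return: while `META_dirty` stays set, the metapage is never freed, so the truncated-folio path must clear the flag first. The `_m`/`_model` helpers below are invented for illustration and only mirror the structure of the kernel functions:

```c
#include <assert.h>

/* Invented bit numbers and a minimal metapage model -- illustration only. */
enum { META_dirty = 0, META_sync = 1 };

struct metapage {
	unsigned long flag;
	int count;
	int freed; /* stand-in for free_metapage() having run */
};

static int test_bit_m(int nr, unsigned long *flag)  { return (*flag >> nr) & 1; }
static void clear_bit_m(int nr, unsigned long *flag) { *flag &= ~(1UL << nr); }

/* Mirrors drop_metapage(): a dirty metapage survives the drop. */
static void drop_metapage_model(struct metapage *mp)
{
	if (mp->count || test_bit_m(META_dirty, &mp->flag))
		return; /* leaked if nothing ever clears the flag */
	mp->freed = 1;
}

/* The fix: on a truncated folio, clear dirty/sync before dropping. */
static void release_truncated_model(struct metapage *mp)
{
	clear_bit_m(META_dirty, &mp->flag);
	clear_bit_m(META_sync, &mp->flag);
	drop_metapage_model(mp);
}
```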
| 316/4 | 2026/03/16 16:30 | tool | codesearch-find-references |
Args:map[IncludeSnippetLines:10 Name:insert_metapage] Results: map[References:[map[ReferenceKind:calls ReferencingEntityKind:function ReferencingEntityName:__get_metapage SourceFile:fs/jfs/jfs_metapage.c SourceLine:772 SourceSnippet: 762: goto unlock;
763: mp->folio = folio;
764: mp->sb = inode->i_sb;
765: mp->flag = 0;
766: mp->xflag = COMMIT_PAGE;
767: mp->count = 1;
768: mp->nohomeok = 0;
769: mp->logical_size = size;
770: mp->data = folio_address(folio) + page_offset;
771: mp->index = lblock;
772: if (unlikely(insert_metapage(folio, mp))) {
773: free_metapage(mp);
774: goto unlock;
775: }
776: lock_metapage(mp);
777: }
778:
779: if (new) {
780: jfs_info("zeroing mp = 0x%p", mp);
781: memset(mp->data, 0, PSIZE);
782: }
] map[ReferenceKind:calls ReferencingEntityKind:function ReferencingEntityName:__metapage_migrate_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:248 SourceSnippet: 238: int rc;
239:
240: mp = folio_to_mp(src, 0);
241: if (metapage_locked(mp))
242: return -EAGAIN;
243:
244: rc = filemap_migrate_folio(mapping, dst, src, mode);
245: if (rc)
246: return rc;
247:
248: if (unlikely(insert_metapage(dst, mp)))
249: return -EAGAIN;
250:
251: page_offset = mp->data - folio_address(src);
252: mp->data = folio_address(dst) + page_offset;
253: mp->folio = dst;
254: remove_metapage(src, mp);
255:
256: return 0;
257: }
] map[ReferenceKind:calls ReferencingEntityKind:function ReferencingEntityName:metapage_read_folio SourceFile:fs/jfs/jfs_metapage.c SourceLine:586 SourceSnippet: 576: BUG_ON(!folio_test_locked(folio));
577: page_start = folio_pos(folio) >> inode->i_blkbits;
578:
579: block_offset = 0;
580: while (block_offset < blocks_per_page) {
581: xlen = blocks_per_page - block_offset;
582: pblock = metapage_get_blocks(inode, page_start + block_offset,
583: &xlen);
584: if (pblock) {
585: if (!folio->private)
586: insert_metapage(folio, NULL);
587: inc_io(folio);
588: if (bio)
589: submit_bio(bio);
590:
591: bio = bio_alloc(inode->i_sb->s_bdev, 1, REQ_OP_READ,
592: GFP_NOFS);
593: bio->bi_iter.bi_sector =
594: pblock << (inode->i_blkbits - 9);
595: bio->bi_end_io = metapage_read_end_io;
596: bio->bi_private = folio;
]] TruncatedOutput:false] |
| 317/4 | 2026/03/16 16:30 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 35740 output: 17 thoughts: 670 Thoughts: **Analyzing Metapage Safety in JFS** Okay, let me break down my understanding here. I'm focusing on the safety of accessing `mp->sb->s_blocksize_bits` within the JFS metapage code. It seems like a few key functions are relevant. First, in `__get_metapage`, the critical `mp->sb` is set *before* `insert_metapage` is called. That's good; it establishes the foundation. Then, in `__metapage_migrate_folio`, the `mp` is *already* initialized, meaning `mp->sb` is valid before any operation that might use it. That reinforces my initial thought. Now, in the case of `metapage_read_folio`, we have `insert_metapage(folio, NULL)`. The crucial point here is that if the `mp` pointer happens to be NULL, `insert_metapage` *doesn't* dereference `mp->sb->s_blocksize_bits` within that specific call. So even if the pointer is null, there is no issue. This means in this instance, dereferencing the block size bits is indeed safe under this null condition. However, I need to zoom in and check specifically for the scenario of `MPS_PER_PAGE > 1` within `__metapage_migrate_folio`. My analysis highlights that the `insert_metapage(dst, mp)` call is using a valid `mp`. Hence, that access to `mp->sb->s_blocksize_bits` is also confirmed to be safe in this situation. It's a key part of the migration process, and the logic seems sound. To ensure comprehensive safety, I need to perform a full code review for any other instances where `folio->mapping` is accessed in `fs/jfs/jfs_metapage.c`, to ensure that I have covered all possible code paths. |
| 318/4 | 2026/03/16 16:30 | tool | grepper |
Args:map[Expression:folio->mapping] Results: map[Output:
Full output is too long, showing 500 out of 2371 lines.
Use more precise expression if possible.
[Documentation/mm/process_addrs.rst=42=Terminology
--
Documentation/mm/process_addrs.rst-56- :c:struct:`!struct address_space` or :c:struct:`!struct anon_vma` object
Documentation/mm/process_addrs.rst:57: (reachable from a folio via :c:member:`!folio->mapping`). VMAs must be stabilised via
Documentation/mm/process_addrs.rst-58- :c:func:`!anon_vma_[try]lock_read` or :c:func:`!anon_vma_[try]lock_write` for
--
arch/s390/kernel/uv.c=296=int s390_wiggle_split_folio(struct mm_struct *mm, struct folio *folio)
--
arch/s390/kernel/uv.c-341- if (!folio_test_dirty(folio) || folio_test_anon(folio) ||
arch/s390/kernel/uv.c:342: !folio->mapping || !mapping_can_writeback(folio->mapping)) {
arch/s390/kernel/uv.c-343- folio_unlock(folio);
--
arch/s390/kernel/uv.c-353- */
arch/s390/kernel/uv.c:354: mapping = folio->mapping;
arch/s390/kernel/uv.c-355- lstart = folio_pos(folio);
--
drivers/dax/device.c=83=static void dax_set_mapping(struct vm_fault *vmf, unsigned long pfn,
--
drivers/dax/device.c-100-
drivers/dax/device.c:101: if (folio->mapping)
drivers/dax/device.c-102- continue;
drivers/dax/device.c-103-
drivers/dax/device.c:104: folio->mapping = filp->f_mapping;
drivers/dax/device.c-105- folio->index = pgoff + i;
--
fs/affs/file.c=528=static int affs_do_read_folio_ofs(struct folio *folio, size_t to, int create)
fs/affs/file.c-529-{
fs/affs/file.c:530: struct inode *inode = folio->mapping->host;
fs/affs/file.c-531- struct super_block *sb = inode->i_sb;
--
fs/affs/file.c=631=static int affs_read_folio_ofs(struct file *file, struct folio *folio)
fs/affs/file.c-632-{
fs/affs/file.c:633: struct inode *inode = folio->mapping->host;
fs/affs/file.c-634- size_t to;
--
fs/affs/symlink.c=14=static int affs_symlink_read_folio(struct file *file, struct folio *folio)
--
fs/affs/symlink.c-16- struct buffer_head *bh;
fs/affs/symlink.c:17: struct inode *inode = folio->mapping->host;
fs/affs/symlink.c-18- char *link = folio_address(folio);
--
fs/befs/linuxvfs.c=471=static int befs_symlink_read_folio(struct file *unused, struct folio *folio)
fs/befs/linuxvfs.c-472-{
fs/befs/linuxvfs.c:473: struct inode *inode = folio->mapping->host;
fs/befs/linuxvfs.c-474- struct super_block *sb = inode->i_sb;
--
fs/btrfs/defrag.c=845=static struct folio *defrag_prepare_one_folio(struct btrfs_inode *inode, pgoff_t index)
--
fs/btrfs/defrag.c-905- */
fs/btrfs/defrag.c:906: if (folio->mapping != mapping || !folio->private) {
fs/btrfs/defrag.c-907- folio_unlock(folio);
--
fs/btrfs/defrag.c-919- folio_lock(folio);
fs/btrfs/defrag.c:920: if (folio->mapping != mapping || !folio->private) {
fs/btrfs/defrag.c-921- folio_unlock(folio);
--
fs/btrfs/extent_io.c=302=static noinline int lock_delalloc_folios(struct inode *inode,
--
fs/btrfs/extent_io.c-330- folio_lock(folio);
fs/btrfs/extent_io.c:331: if (!folio_test_dirty(folio) || folio->mapping != mapping) {
fs/btrfs/extent_io.c-332- folio_unlock(folio);
--
fs/btrfs/extent_io.c=517=static void end_bbio_data_write(struct btrfs_bio *bbio)
--
fs/btrfs/extent_io.c-543- if (error)
fs/btrfs/extent_io.c:544: mapping_set_error(folio->mapping, error);
fs/btrfs/extent_io.c-545- btrfs_folio_clear_writeback(fs_info, folio, start, len);
--
fs/btrfs/extent_io.c=869=static int attach_extent_buffer_folio(struct extent_buffer *eb,
--
fs/btrfs/extent_io.c-881- */
fs/btrfs/extent_io.c:882: if (folio->mapping)
fs/btrfs/extent_io.c:883: lockdep_assert_held(&folio->mapping->i_private_lock);
fs/btrfs/extent_io.c-884-
--
fs/btrfs/extent_io.c=908=int set_folio_extent_mapped(struct folio *folio)
--
fs/btrfs/extent_io.c-911-
fs/btrfs/extent_io.c:912: ASSERT(folio->mapping);
fs/btrfs/extent_io.c-913-
--
fs/btrfs/extent_io.c=926=void clear_folio_extent_mapped(struct folio *folio)
--
fs/btrfs/extent_io.c-929-
fs/btrfs/extent_io.c:930: ASSERT(folio->mapping);
fs/btrfs/extent_io.c-931-
--
fs/btrfs/extent_io.c=997=static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
--
fs/btrfs/extent_io.c-1000-{
fs/btrfs/extent_io.c:1001: struct inode *inode = folio->mapping->host;
fs/btrfs/extent_io.c-1002- struct btrfs_fs_info *fs_info = inode_to_fs_info(inode);
--
fs/btrfs/extent_io.c=1336=int btrfs_read_folio(struct file *file, struct folio *folio)
fs/btrfs/extent_io.c-1337-{
fs/btrfs/extent_io.c:1338: struct inode *vfs_inode = folio->mapping->host;
fs/btrfs/extent_io.c-1339- struct btrfs_inode *inode = BTRFS_I(vfs_inode);
--
fs/btrfs/extent_io.c=1835=static int extent_writepage(struct folio *folio, struct btrfs_bio_ctrl *bio_ctrl)
fs/btrfs/extent_io.c-1836-{
fs/btrfs/extent_io.c:1837: struct btrfs_inode *inode = BTRFS_I(folio->mapping->host);
fs/btrfs/extent_io.c-1838- struct btrfs_fs_info *fs_info = inode->root->fs_info;
--
fs/btrfs/extent_io.c-1913- if (ret < 0)
fs/btrfs/extent_io.c:1914: mapping_set_error(folio->mapping, ret);
fs/btrfs/extent_io.c-1915- /*
--
fs/btrfs/extent_io.c=2455=static int extent_write_cache_pages(struct address_space *mapping,
--
fs/btrfs/extent_io.c-2529- * the folio lock: the folio may be truncated or
fs/btrfs/extent_io.c:2530: * invalidated (changing folio->mapping to NULL).
fs/btrfs/extent_io.c-2531- */
--
fs/btrfs/extent_io.c-2536-
fs/btrfs/extent_io.c:2537: if (unlikely(folio->mapping != mapping)) {
fs/btrfs/extent_io.c-2538- folio_unlock(folio);
--
fs/btrfs/extent_io.c=2915=static bool folio_range_has_eb(struct folio *folio)
--
fs/btrfs/extent_io.c-2918-
fs/btrfs/extent_io.c:2919: lockdep_assert_held(&folio->mapping->i_private_lock);
fs/btrfs/extent_io.c-2920-
--
fs/btrfs/extent_io.c=2929=static void detach_extent_buffer_folio(const struct extent_buffer *eb, struct folio *folio)
--
fs/btrfs/extent_io.c-2931- struct btrfs_fs_info *fs_info = eb->fs_info;
fs/btrfs/extent_io.c:2932: struct address_space *mapping = folio->mapping;
fs/btrfs/extent_io.c-2933- const bool mapped = !test_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags);
--
fs/btrfs/extent_io.c=3252=static struct extent_buffer *grab_extent_buffer(struct btrfs_fs_info *fs_info,
--
fs/btrfs/extent_io.c-3256-
fs/btrfs/extent_io.c:3257: lockdep_assert_held(&folio->mapping->i_private_lock);
fs/btrfs/extent_io.c-3258-
--
fs/btrfs/extent_io.c=3712=static void btree_clear_folio_dirty_tag(struct folio *folio)
--
fs/btrfs/extent_io.c-3715- ASSERT(folio_test_locked(folio));
fs/btrfs/extent_io.c:3716: xa_lock_irq(&folio->mapping->i_pages);
fs/btrfs/extent_io.c-3717- if (!folio_test_dirty(folio))
fs/btrfs/extent_io.c:3718: __xa_clear_mark(&folio->mapping->i_pages, folio->index,
fs/btrfs/extent_io.c-3719- PAGECACHE_TAG_DIRTY);
fs/btrfs/extent_io.c:3720: xa_unlock_irq(&folio->mapping->i_pages);
fs/btrfs/extent_io.c-3721-}
--
fs/btrfs/extent_io.c=4479=static int try_release_subpage_extent_buffer(struct folio *folio)
--
fs/btrfs/extent_io.c-4527- */
fs/btrfs/extent_io.c:4528: spin_lock(&folio->mapping->i_private_lock);
fs/btrfs/extent_io.c-4529- if (!folio_test_private(folio))
--
fs/btrfs/extent_io.c-4532- ret = 0;
fs/btrfs/extent_io.c:4533: spin_unlock(&folio->mapping->i_private_lock);
fs/btrfs/extent_io.c-4534- return ret;
--
fs/btrfs/extent_io.c=4537=int try_release_extent_buffer(struct folio *folio)
--
fs/btrfs/extent_io.c-4547- */
fs/btrfs/extent_io.c:4548: spin_lock(&folio->mapping->i_private_lock);
fs/btrfs/extent_io.c-4549- if (!folio_test_private(folio)) {
fs/btrfs/extent_io.c:4550: spin_unlock(&folio->mapping->i_private_lock);
fs/btrfs/extent_io.c-4551- return 1;
--
fs/btrfs/extent_io.c-4564- spin_unlock(&eb->refs_lock);
fs/btrfs/extent_io.c:4565: spin_unlock(&folio->mapping->i_private_lock);
fs/btrfs/extent_io.c-4566- return 0;
fs/btrfs/extent_io.c-4567- }
fs/btrfs/extent_io.c:4568: spin_unlock(&folio->mapping->i_private_lock);
fs/btrfs/extent_io.c-4569-
--
fs/btrfs/file.c=798=static int prepare_uptodate_folio(struct inode *inode, struct folio *folio, u64 pos,
--
fs/btrfs/file.c-830- */
fs/btrfs/file.c:831: if (folio->mapping != inode->i_mapping || !folio_test_private(folio)) {
fs/btrfs/file.c-832- folio_unlock(folio);
--
fs/btrfs/file.c=1851=static vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
--
fs/btrfs/file.c-1921-
fs/btrfs/file.c:1922: if ((folio->mapping != inode->vfs_inode.i_mapping) ||
fs/btrfs/file.c-1923- (page_start >= size)) {
--
fs/btrfs/free-space-cache.c=446=static int io_ctl_prepare_pages(struct btrfs_io_ctl *io_ctl, bool uptodate)
--
fs/btrfs/free-space-cache.c-475- folio_lock(folio);
fs/btrfs/free-space-cache.c:476: if (folio->mapping != inode->i_mapping) {
fs/btrfs/free-space-cache.c-477- btrfs_err(BTRFS_I(inode)->root->fs_info,
--
fs/btrfs/inode.c=2837=static void btrfs_writepage_fixup_worker(struct btrfs_work *work)
--
fs/btrfs/inode.c-2862- * Before we queued this fixup, we took a reference on the folio.
fs/btrfs/inode.c:2863: * folio->mapping may go NULL, but it shouldn't be moved to a different
fs/btrfs/inode.c-2864- * address space.
fs/btrfs/inode.c-2865- */
fs/btrfs/inode.c:2866: if (!folio->mapping || !folio_test_dirty(folio) ||
fs/btrfs/inode.c-2867- !folio_test_checked(folio)) {
--
fs/btrfs/inode.c-2943- */
fs/btrfs/inode.c:2944: mapping_set_error(folio->mapping, ret);
fs/btrfs/inode.c-2945- btrfs_mark_ordered_io_finished(inode, folio, page_start,
--
fs/btrfs/inode.c=2973=int btrfs_writepage_cow_fixup(struct folio *folio)
fs/btrfs/inode.c-2974-{
fs/btrfs/inode.c:2975: struct inode *inode = folio->mapping->host;
fs/btrfs/inode.c-2976- struct btrfs_fs_info *fs_info = inode_to_fs_info(inode);
--
fs/btrfs/inode.c-3016- * takes place outside of the folio lock, and we can't trust
fs/btrfs/inode.c:3017: * folio->mapping outside of the folio lock.
fs/btrfs/inode.c-3018- */
--
fs/btrfs/inode.c=4973=static int truncate_block_zero_beyond_eof(struct btrfs_inode *inode, u64 start)
--
fs/btrfs/inode.c-4990- folio_lock(folio);
fs/btrfs/inode.c:4991: if (folio->mapping != mapping) {
fs/btrfs/inode.c-4992- folio_unlock(folio);
--
fs/btrfs/inode.c=5036=int btrfs_truncate_block(struct btrfs_inode *inode, u64 offset, u64 start, u64 end)
--
fs/btrfs/inode.c-5134- folio_lock(folio);
fs/btrfs/inode.c:5135: if (folio->mapping != mapping) {
fs/btrfs/inode.c-5136- folio_unlock(folio);
--
fs/btrfs/ordered-data.c=350=static bool can_finish_ordered_extent(struct btrfs_ordered_extent *ordered,
--
fs/btrfs/ordered-data.c-359- if (folio) {
fs/btrfs/ordered-data.c:360: ASSERT(folio->mapping);
fs/btrfs/ordered-data.c-361- ASSERT(folio_pos(folio) <= file_offset);
--
fs/btrfs/relocation.c=2778=static int relocate_one_folio(struct reloc_control *rc,
--
fs/btrfs/relocation.c-2831- }
fs/btrfs/relocation.c:2832: if (folio->mapping != inode->i_mapping) {
fs/btrfs/relocation.c-2833- folio_unlock(folio);
--
fs/btrfs/send.c=5198=static int put_file_data(struct send_ctx *sctx, u64 offset, u32 len)
--
fs/btrfs/send.c-5249- }
fs/btrfs/send.c:5250: if (folio->mapping != mapping) {
fs/btrfs/send.c-5251- folio_unlock(folio);
--
fs/btrfs/subpage.c=66=int btrfs_attach_folio_state(const struct btrfs_fs_info *fs_info,
--
fs/btrfs/subpage.c-78- */
fs/btrfs/subpage.c:79: if (folio->mapping)
fs/btrfs/subpage.c-80- ASSERT(folio_test_locked(folio));
--
fs/btrfs/subpage.c=148=void btrfs_folio_inc_eb_refs(const struct btrfs_fs_info *fs_info, struct folio *folio)
--
fs/btrfs/subpage.c-154-
fs/btrfs/subpage.c:155: ASSERT(folio_test_private(folio) && folio->mapping);
fs/btrfs/subpage.c:156: lockdep_assert_held(&folio->mapping->i_private_lock);
fs/btrfs/subpage.c-157-
--
fs/btrfs/subpage.c=162=void btrfs_folio_dec_eb_refs(const struct btrfs_fs_info *fs_info, struct folio *folio)
--
fs/btrfs/subpage.c-168-
fs/btrfs/subpage.c:169: ASSERT(folio_test_private(folio) && folio->mapping);
fs/btrfs/subpage.c:170: lockdep_assert_held(&folio->mapping->i_private_lock);
fs/btrfs/subpage.c-171-
--
fs/btrfs/subpage.c=177=static void btrfs_subpage_assert(const struct btrfs_fs_info *fs_info,
--
fs/btrfs/subpage.c-187- */
fs/btrfs/subpage.c:188: if (folio->mapping)
fs/btrfs/subpage.c-189- ASSERT(folio_pos(folio) <= start &&
--
fs/btrfs/subpage.h=107=static inline bool btrfs_is_subpage(const struct btrfs_fs_info *fs_info,
--
fs/btrfs/subpage.h-109-{
fs/btrfs/subpage.h:110: if (folio->mapping && folio->mapping->host)
fs/btrfs/subpage.h:111: ASSERT(is_data_inode(BTRFS_I(folio->mapping->host)));
fs/btrfs/subpage.h-112- return fs_info->sectorsize < folio_size(folio);
--
fs/buffer.c=349=static void end_buffer_async_read_io(struct buffer_head *bh, int uptodate)
fs/buffer.c-350-{
fs/buffer.c:351: struct inode *inode = bh->b_folio->mapping->host;
fs/buffer.c-352- bool decrypt = fscrypt_inode_uses_fs_layer_crypto(inode);
--
fs/buffer.c=675=void mark_buffer_dirty_inode(struct buffer_head *bh, struct inode *inode)
--
fs/buffer.c-677- struct address_space *mapping = inode->i_mapping;
fs/buffer.c:678: struct address_space *buffer_mapping = bh->b_folio->mapping;
fs/buffer.c-679-
--
fs/buffer.c=1117=__getblk_slow(struct block_device *bdev, sector_t block,
--
fs/buffer.c-1174- *
fs/buffer.c:1175: * mark_buffer_dirty() is atomic. It takes bh->b_folio->mapping->i_private_lock,
fs/buffer.c-1176- * i_pages lock and mapping->host->i_lock.
--
fs/buffer.c=1178=void mark_buffer_dirty(struct buffer_head *bh)
--
fs/buffer.c-1200- if (!folio_test_set_dirty(folio)) {
fs/buffer.c:1201: mapping = folio->mapping;
fs/buffer.c-1202- if (mapping)
--
fs/buffer.c=1211=void mark_buffer_write_io_error(struct buffer_head *bh)
--
fs/buffer.c-1214- /* FIXME: do we need to set this in both places? */
fs/buffer.c:1215: if (bh->b_folio && bh->b_folio->mapping)
fs/buffer.c:1216: mapping_set_error(bh->b_folio->mapping, -EIO);
fs/buffer.c-1217- if (bh->b_assoc_map)
--
fs/buffer.c=1245=void __bforget(struct buffer_head *bh)
--
fs/buffer.c-1248- if (bh->b_assoc_map) {
fs/buffer.c:1249: struct address_space *buffer_mapping = bh->b_folio->mapping;
fs/buffer.c-1250-
--
fs/buffer.c=1682=struct buffer_head *create_empty_buffers(struct folio *folio,
--
fs/buffer.c-1696-
fs/buffer.c:1697: spin_lock(&folio->mapping->i_private_lock);
fs/buffer.c-1698- if (folio_test_uptodate(folio) || folio_test_dirty(folio)) {
--
fs/buffer.c-1708- folio_attach_private(folio, head);
fs/buffer.c:1709: spin_unlock(&folio->mapping->i_private_lock);
fs/buffer.c-1710-
--
fs/buffer.c=1833=int __block_write_full_folio(struct inode *inode, struct folio *folio,
--
fs/buffer.c-1978- BUG_ON(folio_test_writeback(folio));
fs/buffer.c:1979: mapping_set_error(folio->mapping, err);
fs/buffer.c-1980- folio_start_writeback(folio);
--
fs/buffer.c=2107=int __block_write_begin_int(struct folio *folio, loff_t pos, unsigned len,
--
fs/buffer.c-2111- size_t to = from + len;
fs/buffer.c:2112: struct inode *inode = folio->mapping->host;
fs/buffer.c-2113- size_t block_start, block_end;
--
fs/buffer.c=2386=int block_read_full_folio(struct folio *folio, get_block_t *get_block)
fs/buffer.c-2387-{
fs/buffer.c:2388: struct inode *inode = folio->mapping->host;
fs/buffer.c-2389- sector_t iblock, lblock;
--
fs/buffer.c=2613=int block_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf,
--
fs/buffer.c-2623- size = i_size_read(inode);
fs/buffer.c:2624: if ((folio->mapping != inode->i_mapping) ||
fs/buffer.c-2625- (folio_pos(folio) >= size)) {
--
fs/buffer.c=2725=int block_write_full_folio(struct folio *folio, struct writeback_control *wbc,
--
fs/buffer.c-2727-{
fs/buffer.c:2728: struct inode * const inode = folio->mapping->host;
fs/buffer.c-2729- loff_t i_size = i_size_read(inode);
--
fs/buffer.c=2938=bool try_to_free_buffers(struct folio *folio)
fs/buffer.c-2939-{
fs/buffer.c:2940: struct address_space * const mapping = folio->mapping;
fs/buffer.c-2941- struct buffer_head *buffers_to_free = NULL;
--
fs/ceph/addr.c=140=static void ceph_invalidate_folio(struct folio *folio, size_t offset,
--
fs/ceph/addr.c-142-{
fs/ceph/addr.c:143: struct inode *inode = folio->mapping->host;
fs/ceph/addr.c-144- struct ceph_client *cl = ceph_inode_to_client(inode);
--
fs/ceph/addr.c=717=static int write_folio_nounlock(struct folio *folio,
--
fs/ceph/addr.c-720- struct page *page = &folio->page;
fs/ceph/addr.c:721: struct inode *inode = folio->mapping->host;
fs/ceph/addr.c-722- struct ceph_inode_info *ci = ceph_inode(inode);
--
fs/ceph/addr.c=1134=int ceph_check_page_before_write(struct address_space *mapping,
--
fs/ceph/addr.c-1144- /* only dirty folios, or our accounting breaks */
fs/ceph/addr.c:1145: if (unlikely(!folio_test_dirty(folio) || folio->mapping != mapping)) {
fs/ceph/addr.c-1146- doutc(cl, "!dirty or !mapping %p\n", folio);
--
fs/ceph/addr.c=1781=ceph_find_incompatible(struct folio *folio)
fs/ceph/addr.c-1782-{
fs/ceph/addr.c:1783: struct inode *inode = folio->mapping->host;
fs/ceph/addr.c-1784- struct ceph_client *cl = ceph_inode_to_client(inode);
--
fs/coda/symlink.c=23=static int coda_symlink_filler(struct file *file, struct folio *folio)
fs/coda/symlink.c-24-{
fs/coda/symlink.c:25: struct inode *inode = folio->mapping->host;
fs/coda/symlink.c-26- int error;
--
fs/cramfs/inode.c=821=static int cramfs_read_folio(struct file *file, struct folio *folio)
fs/cramfs/inode.c-822-{
fs/cramfs/inode.c:823: struct inode *inode = folio->mapping->host;
fs/cramfs/inode.c-824- u32 maxblock;
--
fs/crypto/crypto.c=172=struct page *fscrypt_encrypt_pagecache_blocks(struct folio *folio,
--
fs/crypto/crypto.c-174-{
fs/crypto/crypto.c:175: const struct inode *inode = folio->mapping->host;
fs/crypto/crypto.c-176- const struct fscrypt_inode_info *ci = fscrypt_get_inode_info_raw(inode);
--
fs/crypto/crypto.c=255=int fscrypt_decrypt_pagecache_blocks(struct folio *folio, size_t len,
--
fs/crypto/crypto.c-257-{
fs/crypto/crypto.c:258: const struct inode *inode = folio->mapping->host;
fs/crypto/crypto.c-259- const struct fscrypt_inode_info *ci = fscrypt_get_inode_info_raw(inode);
--
fs/dax.c=351=static inline bool dax_folio_is_shared(struct folio *folio)
fs/dax.c-352-{
fs/dax.c:353: return !folio->mapping && folio->share;
fs/dax.c-354-}
--
fs/dax.c=366=static void dax_folio_make_shared(struct folio *folio)
--
fs/dax.c-369- * folio is not currently shared so mark it as shared by clearing
fs/dax.c:370: * folio->mapping.
fs/dax.c-371- */
fs/dax.c:372: folio->mapping = NULL;
fs/dax.c-373-
--
fs/dax.c=381=static inline unsigned long dax_folio_put(struct folio *folio)
--
fs/dax.c-393-
fs/dax.c:394: folio->mapping = NULL;
fs/dax.c-395- order = folio_order(folio);
--
fs/dax.c-407-
fs/dax.c:408: new_folio->mapping = NULL;
fs/dax.c-409- /*
--
fs/dax.c=441=static void dax_associate_entry(void *entry, struct address_space *mapping,
--
fs/dax.c-451- index = linear_page_index(vma, address & ~(size - 1));
fs/dax.c:452: if (shared && (folio->mapping || dax_folio_is_shared(folio))) {
fs/dax.c:453: if (folio->mapping)
fs/dax.c-454- dax_folio_make_shared(folio);
--
fs/dax.c-459- } else {
fs/dax.c:460: WARN_ON_ONCE(folio->mapping);
fs/dax.c-461- dax_folio_init(entry);
fs/dax.c-462- folio = dax_to_folio(entry);
fs/dax.c:463: folio->mapping = mapping;
fs/dax.c-464- folio->index = index;
--
fs/dax.c=500=dax_entry_t dax_lock_folio(struct folio *folio)
--
fs/dax.c-504-
fs/dax.c:505: /* Ensure folio->mapping isn't freed while we look at it */
fs/dax.c-506- rcu_read_lock();
fs/dax.c-507- for (;;) {
fs/dax.c:508: struct address_space *mapping = READ_ONCE(folio->mapping);
fs/dax.c-509-
--
fs/dax.c-526- xas_lock_irq(&xas);
fs/dax.c:527: if (mapping != folio->mapping) {
fs/dax.c-528- xas_unlock_irq(&xas);
--
fs/dax.c=547=void dax_unlock_folio(struct folio *folio, dax_entry_t cookie)
fs/dax.c-548-{
fs/dax.c:549: struct address_space *mapping = folio->mapping;
fs/dax.c-550- XA_STATE(xas, &mapping->i_pages, folio->index);
--
fs/ecryptfs/crypto.c=350=int ecryptfs_encrypt_page(struct folio *folio)
--
fs/ecryptfs/crypto.c-359-
fs/ecryptfs/crypto.c:360: ecryptfs_inode = folio->mapping->host;
fs/ecryptfs/crypto.c-361- crypt_stat =
--
fs/ecryptfs/crypto.c=420=int ecryptfs_decrypt_page(struct folio *folio)
--
fs/ecryptfs/crypto.c-428-
fs/ecryptfs/crypto.c:429: ecryptfs_inode = folio->mapping->host;
fs/ecryptfs/crypto.c-430- crypt_stat =
--
fs/ecryptfs/mmap.c=91=ecryptfs_copy_up_encrypted_with_header(struct folio *folio,
--
fs/ecryptfs/mmap.c-116- rc = ecryptfs_read_xattr_region(
fs/ecryptfs/mmap.c:117: page_virt, folio->mapping->host);
fs/ecryptfs/mmap.c-118- strip_xattr_flag(page_virt + 16, crypt_stat);
--
fs/ecryptfs/mmap.c-138- (lower_offset & ~PAGE_MASK),
fs/ecryptfs/mmap.c:139: crypt_stat->extent_size, folio->mapping->host);
fs/ecryptfs/mmap.c-140- if (rc) {
--
fs/ecryptfs/mmap.c=163=static int ecryptfs_read_folio(struct file *file, struct folio *folio)
fs/ecryptfs/mmap.c-164-{
fs/ecryptfs/mmap.c:165: struct inode *inode = folio->mapping->host;
fs/ecryptfs/mmap.c-166- struct ecryptfs_crypt_stat *crypt_stat =
--
fs/ecryptfs/mmap.c=214=static int fill_zeros_to_end_of_page(struct folio *folio, unsigned int to)
fs/ecryptfs/mmap.c-215-{
fs/ecryptfs/mmap.c:216: struct inode *inode = folio->mapping->host;
fs/ecryptfs/mmap.c-217- int end_byte_in_page;
--
fs/efs/symlink.c=15=static int efs_symlink_read_folio(struct file *file, struct folio *folio)
--
fs/efs/symlink.c-18- struct buffer_head *bh;
fs/efs/symlink.c:19: struct inode *inode = folio->mapping->host;
fs/efs/symlink.c-20- efs_block_t size = inode->i_size;
--
fs/erofs/fscache.c=209=static int erofs_fscache_meta_read_folio(struct file *data, struct folio *folio)
fs/erofs/fscache.c-210-{
fs/erofs/fscache.c:211: struct erofs_fscache *ctx = folio->mapping->host->i_private;
fs/erofs/fscache.c-212- int ret = -ENOMEM;
--
fs/erofs/fscache.c-215-
fs/erofs/fscache.c:216: req = erofs_fscache_req_alloc(folio->mapping,
fs/erofs/fscache.c-217- folio_pos(folio), folio_size(folio));
--
]
] |
| 319/4 | 2026/03/16 16:30 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 46829 output: 49 thoughts: 96 Thoughts: **Analyzing Folio Mapping Usage in JFS** Alright, let's break this down. It's clear that the `folio->mapping` field is a commonly employed element. My initial focus is on its application within the JFS file system, specifically looking at `fs/jfs/jfs_metapage.c`. It appears that `folio->mapping` is primarily referenced in the functions `insert_metapage`, `remove_metapage`, and `metapage_write_folio`. However, hold on a second… I see that `metapage_write_folio` actually leverages `folio->mapping->host`. That's a key distinction and warrants closer inspection. Time to dive deeper into `metapage_write_folio` to understand how the host is being utilized in the context of these metapage operations. |
| 320/4 | 2026/03/16 16:30 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:metapage_write_folio] Results: map[SourceCode: 423: static int metapage_write_folio(struct folio *folio,
424: struct writeback_control *wbc)
425: {
426: struct bio *bio = NULL;
427: int block_offset; /* block offset of mp within page */
428: struct inode *inode = folio->mapping->host;
429: int blocks_per_mp = JFS_SBI(inode->i_sb)->nbperpage;
430: int len;
431: int xlen;
432: struct metapage *mp;
433: int redirty = 0;
434: sector_t lblock;
435: int nr_underway = 0;
436: sector_t pblock;
437: sector_t next_block = 0;
438: sector_t page_start;
439: unsigned long bio_bytes = 0;
440: unsigned long bio_offset = 0;
441: int offset;
442: int bad_blocks = 0;
443:
444: page_start = folio_pos(folio) >> inode->i_blkbits;
445: BUG_ON(!folio_test_locked(folio));
446: BUG_ON(folio_test_writeback(folio));
447: folio_start_writeback(folio);
448:
449: for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
450: mp = folio_to_mp(folio, offset);
451:
452: if (!mp || !test_bit(META_dirty, &mp->flag))
453: continue;
454:
455: if (mp->nohomeok && !test_bit(META_forcewrite, &mp->flag)) {
456: redirty = 1;
457: /*
458: * Make sure this page isn't blocked indefinitely.
459: * If the journal isn't undergoing I/O, push it
460: */
461: if (mp->log && !(mp->log->cflag & logGC_PAGEOUT))
462: jfs_flush_journal(mp->log, 0);
463: continue;
464: }
465:
466: clear_bit(META_dirty, &mp->flag);
467: set_bit(META_io, &mp->flag);
468: block_offset = offset >> inode->i_blkbits;
469: lblock = page_start + block_offset;
470: if (bio) {
471: if (xlen && lblock == next_block) {
472: /* Contiguous, in memory & on disk */
473: len = min(xlen, blocks_per_mp);
474: xlen -= len;
475: bio_bytes += len << inode->i_blkbits;
476: continue;
477: }
478: /* Not contiguous */
479: bio_add_folio_nofail(bio, folio, bio_bytes, bio_offset);
480: /*
481: * Increment counter before submitting i/o to keep
482: * count from hitting zero before we're through
483: */
484: inc_io(folio);
485: if (!bio->bi_iter.bi_size)
486: goto dump_bio;
487: submit_bio(bio);
488: nr_underway++;
489: bio = NULL;
490: } else
491: inc_io(folio);
492: xlen = (folio_size(folio) - offset) >> inode->i_blkbits;
493: pblock = metapage_get_blocks(inode, lblock, &xlen);
494: if (!pblock) {
495: printk(KERN_ERR "JFS: metapage_get_blocks failed\n");
496: /*
497: * We already called inc_io(), but can't cancel it
498: * with dec_io() until we're done with the page
499: */
500: bad_blocks++;
501: continue;
502: }
503: len = min(xlen, (int)JFS_SBI(inode->i_sb)->nbperpage);
504:
505: bio = bio_alloc(inode->i_sb->s_bdev, 1, REQ_OP_WRITE, GFP_NOFS);
506: bio->bi_iter.bi_sector = pblock << (inode->i_blkbits - 9);
507: bio->bi_end_io = metapage_write_end_io;
508: bio->bi_private = folio;
509:
510: /* Don't call bio_add_page yet, we may add to this vec */
511: bio_offset = offset;
512: bio_bytes = len << inode->i_blkbits;
513:
514: xlen -= len;
515: next_block = lblock + len;
516: }
517: if (bio) {
518: bio_add_folio_nofail(bio, folio, bio_bytes, bio_offset);
519: if (!bio->bi_iter.bi_size)
520: goto dump_bio;
521:
522: submit_bio(bio);
523: nr_underway++;
524: }
525: if (redirty)
526: folio_redirty_for_writepage(wbc, folio);
527:
528: folio_unlock(folio);
529:
530: if (bad_blocks)
531: goto err_out;
532:
533: if (nr_underway == 0)
534: folio_end_writeback(folio);
535:
536: return 0;
537: dump_bio:
538: print_hex_dump(KERN_ERR, "JFS: dump of bio: ", DUMP_PREFIX_ADDRESS, 16,
539: 4, bio, sizeof(*bio), 0);
540: bio_put(bio);
541: folio_unlock(folio);
542: dec_io(folio, BLK_STS_OK, last_write_complete);
543: err_out:
544: while (bad_blocks--)
545: dec_io(folio, BLK_STS_OK, last_write_complete);
546: return -EIO;
547: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 321/4 | 2026/03/16 16:30 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 49119 output: 48 thoughts: 111 Thoughts: **Digging into a Potential Folio Mapping Issue** Okay, so I'm revisiting this `metapage_write_folio` function. My main concern is that it seems to be relying on `folio->mapping->host`. Now, if that `folio->mapping` is NULL... we're going to have a bad day, probably a crash. But didn't we just patch up the `release_metapage` and `force_metapage` functions to explicitly check if `folio->mapping` is actually valid *before* they even get to calling `metapage_write_one`, which, of course, then calls the problematic `metapage_write_folio`? That's a relief, it means we did a good job protecting that particular codepath. Wait, are there *other* callers of `metapage_write_folio`? We need to verify that we haven't missed any. Let's see... Yep, looks like `metapage_writepages` is also calling this function. Okay, now we need to check how *it* handles `folio->mapping`. |
| 322/4 | 2026/03/16 16:30 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:metapage_writepages] Results: map[SourceCode: 549: static int metapage_writepages(struct address_space *mapping,
550: struct writeback_control *wbc)
551: {
552: struct blk_plug plug;
553: struct folio *folio = NULL;
554: int err;
555:
556: blk_start_plug(&plug);
557: while ((folio = writeback_iter(mapping, wbc, folio, &err)))
558: err = metapage_write_folio(folio, wbc);
559: blk_finish_plug(&plug);
560:
561: return err;
562: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 323/4 | 2026/03/16 16:30 | llm | patch-generator |
0mModel:gemini-3.1-pro-preview Tokens: input: 49749 output: 41 thoughts: 98 Thoughts: **Analyzing `metapage_writepages` and `writeback_iter`** Okay, so I'm digging into `metapage_writepages` and I see it uses `writeback_iter`. Right, `writeback_iter` is the workhorse here, going through the folios within the mapping. Now, I'm trying to mentally trace through a potential edge case. I'm wondering about what happens with a truncated folio. If a folio gets truncated *while* `writeback_iter` is running, could it still get processed? Specifically, since `writeback_iter` might return it if it was locked? Hmm, the key seems to be what happens to `folio->mapping` upon truncation. Ah, yes! When a folio is truncated, `folio->mapping` goes to NULL. That's the crucial point. So, the real question becomes: does `writeback_iter` perform a check on `folio->mapping` at any point during its iteration? It *has* to, right? Otherwise, we'd have a classic use-after-free scenario, leading to all sorts of corruption. Time to dive into the `writeback_iter` code and confirm this assumption. Let's see... |
| 324/4 | 2026/03/16 16:30 | tool | codesearch-definition-source |
Args:map[ContextFile:mm/page-writeback.c IncludeLines:true Name:writeback_iter] Results: map[SourceCode:2428: /**
2429: * writeback_iter - iterate folio of a mapping for writeback
2430: * @mapping: address space structure to write
2431: * @wbc: writeback context
2432: * @folio: previously iterated folio (%NULL to start)
2433: * @error: in-out pointer for writeback errors (see below)
2434: *
2435: * This function returns the next folio for the writeback operation described by
2436: * @wbc on @mapping and should be called in a while loop in the ->writepages
2437: * implementation.
2438: *
2439: * To start the writeback operation, %NULL is passed in the @folio argument, and
2440: * for every subsequent iteration the folio returned previously should be passed
2441: * back in.
2442: *
2443: * If there was an error in the per-folio writeback inside the writeback_iter()
2444: * loop, @error should be set to the error value.
2445: *
2446: * Once the writeback described in @wbc has finished, this function will return
2447: * %NULL and if there was an error in any iteration restore it to @error.
2448: *
2449: * Note: callers should not manually break out of the loop using break or goto
2450: * but must keep calling writeback_iter() until it returns %NULL.
2451: *
2452: * Return: the folio to write or %NULL if the loop is done.
2453: */
2454: struct folio *writeback_iter(struct address_space *mapping,
2455: struct writeback_control *wbc, struct folio *folio, int *error)
2456: {
2457: if (!folio) {
2458: folio_batch_init(&wbc->fbatch);
2459: wbc->saved_err = *error = 0;
2460:
2461: /*
2462: * For range cyclic writeback we remember where we stopped so
2463: * that we can continue where we stopped.
2464: *
2465: * For non-cyclic writeback we always start at the beginning of
2466: * the passed in range.
2467: */
2468: if (wbc->range_cyclic)
2469: wbc->index = mapping->writeback_index;
2470: else
2471: wbc->index = wbc->range_start >> PAGE_SHIFT;
2472:
2473: /*
2474: * To avoid livelocks when other processes dirty new pages, we
2475: * first tag pages which should be written back and only then
2476: * start writing them.
2477: *
2478: * For data-integrity writeback we have to be careful so that we
2479: * do not miss some pages (e.g., because some other process has
2480: * cleared the TOWRITE tag we set). The rule we follow is that
2481: * TOWRITE tag can be cleared only by the process clearing the
2482: * DIRTY tag (and submitting the page for I/O).
2483: */
2484: if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages)
2485: tag_pages_for_writeback(mapping, wbc->index,
2486: wbc_end(wbc));
2487: } else {
2488: wbc->nr_to_write -= folio_nr_pages(folio);
2489:
2490: WARN_ON_ONCE(*error > 0);
2491:
2492: /*
2493: * For integrity writeback we have to keep going until we have
2494: * written all the folios we tagged for writeback above, even if
2495: * we run past wbc->nr_to_write or encounter errors.
2496: * We stash away the first error we encounter in wbc->saved_err
2497: * so that it can be retrieved when we're done. This is because
2498: * the file system may still have state to clear for each folio.
2499: *
2500: * For background writeback we exit as soon as we run past
2501: * wbc->nr_to_write or encounter the first error.
2502: */
2503: if (wbc->sync_mode == WB_SYNC_ALL) {
2504: if (*error && !wbc->saved_err)
2505: wbc->saved_err = *error;
2506: } else {
2507: if (*error || wbc->nr_to_write <= 0)
2508: goto done;
2509: }
2510: }
2511:
2512: folio = writeback_get_folio(mapping, wbc);
2513: if (!folio) {
2514: /*
2515: * To avoid deadlocks between range_cyclic writeback and callers
2516: * that hold folios in writeback to aggregate I/O until
2517: * the writeback iteration finishes, we do not loop back to the
2518: * start of the file. Doing so causes a folio lock/folio
2519: * writeback access order inversion - we should only ever lock
2520: * multiple folios in ascending folio->index order, and looping
2521: * back to the start of the file violates that rule and causes
2522: * deadlocks.
2523: */
2524: if (wbc->range_cyclic)
2525: mapping->writeback_index = 0;
2526:
2527: /*
2528: * Return the first error we encountered (if there was any) to
2529: * the caller.
2530: */
2531: *error = wbc->saved_err;
2532: }
2533: return folio;
2534:
2535: done:
2536: if (wbc->range_cyclic)
2537: mapping->writeback_index = folio_next_index(folio);
2538: folio_batch_release(&wbc->fbatch);
2539: return NULL;
2540: }
SourceFile:mm/page-writeback.c] |
| 325/4 | 2026/03/16 16:30 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 51963 output: 32 thoughts: 472 Thoughts: **Analyzing the Safety of `metapage_write_folio`**
Okay, so let me break this down. From what I see, `writeback_get_folio` is designed to be careful. It grabs a folio lock, and then *crucially* verifies `folio->mapping == mapping` before proceeding. This means that when `metapage_writepages` calls `metapage_write_folio`, it should *only* be dealing with folios that definitely have a non-NULL `folio->mapping`. Therefore, I thought `metapage_write_folio` should be safe from potential issues relating to a NULL `folio->mapping` thanks to that filtering.
But then I thought, what about `metapage_write_one`? Let's look at the code:
```c
static int metapage_write_one(struct folio *folio)
{
	struct address_space *mapping = folio->mapping;
	struct writeback_control wbc = {
		.sync_mode = WB_SYNC_ALL,
		.nr_to_write = folio_nr_pages(folio),
	};
	int ret = 0;

	BUG_ON(!folio_test_locked(folio));

	folio_wait_writeback(folio);

	if (folio_clear_dirty_for_io(folio)) {
		folio_get(folio);
		ret = metapage_write_folio(folio, &wbc);
		if (ret == 0)
			folio_wait_writeback(folio);
		folio_put(folio);
	} else {
		folio_unlock(folio);
	}

	if (!ret)
		ret = filemap_check_errors(mapping);
	return ret;
}
```
Now, if `metapage_write_one` were called with a folio whose `folio->mapping` is NULL, would it crash at the line `struct address_space *mapping = folio->mapping;`? No: in that case `mapping` would simply be assigned NULL. The crash would actually happen later, at `filemap_check_errors(mapping)`, which dereferences `mapping`.
But hold on, didn't we fix `release_metapage` and `force_metapage` so they would *not* call `metapage_write_one` if `folio->mapping` is NULL? If that's right, then it means `metapage_write_one` should *never* receive a folio with a NULL `mapping` pointer. So it should be safe.
Still, I need to be completely sure. Are there any other callers of `metapage_write_one` that I need to investigate? I should double-check that this path is truly safe.
|
| 326/4 | 2026/03/16 16:31 | tool | codesearch-find-references |
Args:map[IncludeSnippetLines:10 Name:metapage_write_one] Results: map[References:[map[ReferenceKind:calls ReferencingEntityKind:function ReferencingEntityName:force_metapage SourceFile:fs/jfs/jfs_metapage.c SourceLine:840 SourceSnippet: 831: void force_metapage(struct metapage *mp)
832: {
833: struct folio *folio = mp->folio;
834: jfs_info("force_metapage: mp = 0x%p", mp);
835: set_bit(META_forcewrite, &mp->flag);
836: clear_bit(META_sync, &mp->flag);
837: folio_get(folio);
838: folio_lock(folio);
839: folio_mark_dirty(folio);
840: if (metapage_write_one(folio))
841: jfs_error(mp->sb, "metapage_write_one() failed\n");
842: clear_bit(META_forcewrite, &mp->flag);
843: folio_put(folio);
844: }
] map[ReferenceKind:calls ReferencingEntityKind:function ReferencingEntityName:release_metapage SourceFile:fs/jfs/jfs_metapage.c SourceLine:884 SourceSnippet: 874: if (--mp->count || mp->nohomeok) {
875: folio_unlock(folio);
876: folio_put(folio);
877: return;
878: }
879:
880: if (test_bit(META_dirty, &mp->flag)) {
881: folio_mark_dirty(folio);
882: if (test_bit(META_sync, &mp->flag)) {
883: clear_bit(META_sync, &mp->flag);
884: if (metapage_write_one(folio))
885: jfs_error(mp->sb, "metapage_write_one() failed\n");
886: folio_lock(folio);
887: }
888: } else if (mp->lsn) /* discard_metapage doesn't remove it */
889: remove_from_logsync(mp);
890:
891: /* Try to keep metapages from using up too much memory */
892: drop_metapage(folio, mp);
893:
894: folio_unlock(folio);
]] TruncatedOutput:false] |
| 327/4 | 2026/03/16 16:31 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 53801 output: 48 thoughts: 1321 Thoughts: **Double-Checking `metapage_write_one` Callers**
Okay, so I've pinned down `force_metapage` and `release_metapage` as the only callers of `metapage_write_one`. The previous analysis suggested fixes, but let's meticulously re-evaluate these two functions.
First, I'll revisit `force_metapage`: I see that `metapage_write_one` unlocks the folio. If `folio->mapping` is NULL, then `folio_unlock(folio)` is called directly. So in *both* scenarios, whether `folio->mapping` is valid or not, the folio *is* unlocked at some point before the end of the function. After that it's good: `clear_bit(META_forcewrite, &mp->flag)` and `folio_put(folio)` are performed. This sequence appears correct.
Now, let's scrutinize `release_metapage`: similar to the previous case, if `folio->mapping` is NULL, the code clears the `META_dirty` and `META_sync` flags and, if `mp->lsn` has a value, removes the metapage from the logsync list. Then `drop_metapage(folio, mp)` is invoked, and finally `folio_unlock(folio)` is done. This also looks correct.
But what if `folio->mapping` is *not* NULL, `test_bit(META_dirty, &mp->flag)` *is* true, and `test_bit(META_sync, &mp->flag)` *is* false? Here, `metapage_write_one` isn't called, and the folio stays locked. Then `drop_metapage` will return early, and the `folio_unlock(folio)` is executed. This is exactly the same as the original code path, so no issues there.
Now, what if `folio->mapping` *is* NULL, and `test_bit(META_dirty, &mp->flag)` was true? We clear the `META_dirty` bit. Then `drop_metapage` is called. Because the `META_dirty` flag is cleared, `drop_metapage` *will* proceed to call `remove_metapage` and `free_metapage` (assuming `mp->count` is 0, `mp->nohomeok` is 0, and `META_io` is not set). Yes, this is the expected behavior, because a truncated folio implies the metapage should be dropped, and the flags are appropriately cleared.
Let's double-check `drop_metapage` directly to be sure: I can see that if `META_dirty` is cleared, `drop_metapage` *does* free the metapage, as anticipated. This aligns perfectly with the root cause analysis: "Clearing the `META_dirty` flag is crucial because `drop_metapage()` checks this flag; if it remains set, `drop_metapage()` will return early, causing a permanent memory leak of the metapage structure." Excellent. Just to be thorough, I should also check if `folio->mapping` is dereferenced anywhere else. Let's look into that in `__get_metapage`. |
| 328/4 | 2026/03/16 16:31 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:__get_metapage] Results: map[SourceCode: 683: struct metapage *__get_metapage(struct inode *inode, unsigned long lblock,
684: unsigned int size, int absolute,
685: unsigned long new)
686: {
687: int l2BlocksPerPage;
688: int l2bsize;
689: struct address_space *mapping;
690: struct metapage *mp = NULL;
691: struct folio *folio;
692: unsigned long page_index;
693: unsigned long page_offset;
694:
695: jfs_info("__get_metapage: ino = %ld, lblock = 0x%lx, abs=%d",
696: inode->i_ino, lblock, absolute);
697:
698: l2bsize = inode->i_blkbits;
699: l2BlocksPerPage = PAGE_SHIFT - l2bsize;
700: page_index = lblock >> l2BlocksPerPage;
701: page_offset = (lblock - (page_index << l2BlocksPerPage)) << l2bsize;
702: if ((page_offset + size) > PAGE_SIZE) {
703: jfs_err("MetaData crosses page boundary!!");
704: jfs_err("lblock = %lx, size = %d", lblock, size);
705: dump_stack();
706: return NULL;
707: }
708: if (absolute)
709: mapping = JFS_SBI(inode->i_sb)->direct_inode->i_mapping;
710: else {
711: /*
712: * If an nfs client tries to read an inode that is larger
713: * than any existing inodes, we may try to read past the
714: * end of the inode map
715: */
716: if ((lblock << inode->i_blkbits) >= inode->i_size)
717: return NULL;
718: mapping = inode->i_mapping;
719: }
720:
721: if (new && (PSIZE == PAGE_SIZE)) {
722: folio = filemap_grab_folio(mapping, page_index);
723: if (IS_ERR(folio)) {
724: jfs_err("filemap_grab_folio failed!");
725: return NULL;
726: }
727: folio_mark_uptodate(folio);
728: } else {
729: folio = read_mapping_folio(mapping, page_index, NULL);
730: if (IS_ERR(folio)) {
731: jfs_err("read_mapping_page failed!");
732: return NULL;
733: }
734: folio_lock(folio);
735: }
736:
737: mp = folio_to_mp(folio, page_offset);
738: if (mp) {
739: if (mp->logical_size != size) {
740: jfs_error(inode->i_sb,
741: "get_mp->logical_size != size\n");
742: jfs_err("logical_size = %d, size = %d",
743: mp->logical_size, size);
744: dump_stack();
745: goto unlock;
746: }
747: mp->count++;
748: lock_metapage(mp);
749: if (test_bit(META_discard, &mp->flag)) {
750: if (!new) {
751: jfs_error(inode->i_sb,
752: "using a discarded metapage\n");
753: discard_metapage(mp);
754: goto unlock;
755: }
756: clear_bit(META_discard, &mp->flag);
757: }
758: } else {
759: INCREMENT(mpStat.pagealloc);
760: mp = alloc_metapage(GFP_NOFS);
761: if (!mp)
762: goto unlock;
763: mp->folio = folio;
764: mp->sb = inode->i_sb;
765: mp->flag = 0;
766: mp->xflag = COMMIT_PAGE;
767: mp->count = 1;
768: mp->nohomeok = 0;
769: mp->logical_size = size;
770: mp->data = folio_address(folio) + page_offset;
771: mp->index = lblock;
772: if (unlikely(insert_metapage(folio, mp))) {
773: free_metapage(mp);
774: goto unlock;
775: }
776: lock_metapage(mp);
777: }
778:
779: if (new) {
780: jfs_info("zeroing mp = 0x%p", mp);
781: memset(mp->data, 0, PSIZE);
782: }
783:
784: folio_unlock(folio);
785: jfs_info("__get_metapage: returning = 0x%p data = 0x%p", mp, mp->data);
786: return mp;
787:
788: unlock:
789: folio_unlock(folio);
790: return NULL;
791: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 329/4 | 2026/03/16 16:31 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 57537 output: 49 thoughts: 53 Thoughts: Here's my summary, as the expert: **Metapage Dereferencing Inquiry** Okay, let's see, I'm diving into this metapage interaction... In `__get_metapage`, I notice that `folio->mapping` isn't being dereferenced directly. Hmm, that's interesting. I'd expect to see a dereference there, or maybe later in the call chain. Wait a minute, I'm remembering that `folio_to_mp` also doesn't dereference `folio->mapping`! I see. So it's not done in the obvious places. This is becoming curious. Alright, what's going on in `metapage_read_folio`? I'm betting that's where the critical piece of the puzzle lies. I need to trace through *that* function now, to understand how the `folio->mapping` information is actually being used. This could be indicative of a clever optimization or perhaps a subtle bug. I'll get to the bottom of this. |
| 330/4 | 2026/03/16 16:31 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:metapage_read_folio] Results: map[SourceCode: 564: static int metapage_read_folio(struct file *fp, struct folio *folio)
565: {
566: struct inode *inode = folio->mapping->host;
567: struct bio *bio = NULL;
568: int block_offset;
569: int blocks_per_page = i_blocks_per_folio(inode, folio);
570: sector_t page_start; /* address of page in fs blocks */
571: sector_t pblock;
572: int xlen;
573: unsigned int len;
574: int offset;
575:
576: BUG_ON(!folio_test_locked(folio));
577: page_start = folio_pos(folio) >> inode->i_blkbits;
578:
579: block_offset = 0;
580: while (block_offset < blocks_per_page) {
581: xlen = blocks_per_page - block_offset;
582: pblock = metapage_get_blocks(inode, page_start + block_offset,
583: &xlen);
584: if (pblock) {
585: if (!folio->private)
586: insert_metapage(folio, NULL);
587: inc_io(folio);
588: if (bio)
589: submit_bio(bio);
590:
591: bio = bio_alloc(inode->i_sb->s_bdev, 1, REQ_OP_READ,
592: GFP_NOFS);
593: bio->bi_iter.bi_sector =
594: pblock << (inode->i_blkbits - 9);
595: bio->bi_end_io = metapage_read_end_io;
596: bio->bi_private = folio;
597: len = xlen << inode->i_blkbits;
598: offset = block_offset << inode->i_blkbits;
599: bio_add_folio_nofail(bio, folio, len, offset);
600: block_offset += xlen;
601: } else
602: block_offset++;
603: }
604: if (bio)
605: submit_bio(bio);
606: else
607: folio_unlock(folio);
608:
609: return 0;
610: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 331/4 | 2026/03/16 16:31 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 58606 output: 49 thoughts: 64 Thoughts: Okay, here's my take on those thoughts, framed as my own mental processing: **Examining Folio Operations and Mapping Guarantees** Alright, I'm digging into these folio-related functions, specifically `metapage_read_folio`. I see that they're using `folio->mapping->host`. That makes sense; it needs the backing storage information. Now, the crucial part is the context. `metapage_read_folio` is explicitly an address space operation. Because of that, the kernel *guarantees* that `folio->mapping` will *never* be NULL when this function is called. We are operating in a well-defined context, and the mapping should be valid. The assumptions hold. Now, my mind immediately jumps to the counterpart, `metapage_release_folio`. Does this operation have the same kind of implicit or explicit guarantee? I need to verify that `folio->mapping` is also guaranteed to be non-NULL in the context of `metapage_release_folio`. Or are there cases, even corner cases, where it could be NULL in that function? I need to confirm the conditions where `metapage_release_folio` is called to make sure that assumption holds. I should check the code for any potential scenarios where it might be NULL. Any subtle edge cases could be lurking there. Time to investigate! |
| 332/4 | 2026/03/16 16:31 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:metapage_release_folio] Results: map[SourceCode: 612: static bool metapage_release_folio(struct folio *folio, gfp_t gfp_mask)
613: {
614: struct metapage *mp;
615: bool ret = true;
616: int offset;
617:
618: for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
619: mp = folio_to_mp(folio, offset);
620:
621: if (!mp)
622: continue;
623:
624: jfs_info("metapage_release_folio: mp = 0x%p", mp);
625: if (mp->count || mp->nohomeok ||
626: test_bit(META_dirty, &mp->flag)) {
627: jfs_info("count = %ld, nohomeok = %d", mp->count,
628: mp->nohomeok);
629: ret = false;
630: continue;
631: }
632: if (mp->lsn)
633: remove_from_logsync(mp);
634: remove_metapage(folio, mp);
635: INCREMENT(mpStat.pagefree);
636: free_metapage(mp);
637: }
638: return ret;
639: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 333/4 | 2026/03/16 16:31 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 59448 output: 50 thoughts: 399 Thoughts: **Analyzing Metapage Release and Folio Invalidations**
Okay, so I'm thinking about this `metapage_release_folio` function and the recent change to `remove_metapage`. It looks like we've swapped out the use of `folio->mapping->host->i_blkbits` for `mp->sb->s_blocksize_bits` in `remove_metapage`. That's smart. Now, even if the `folio->mapping` is NULL, which I suspect can happen, the code is safe.
I'm remembering that `metapage_release_folio` is called by `metapage_invalidate_folio`, specifically when a folio is being invalidated. Now, does that mean `folio->mapping` might be NULL at that point? The code snippet for `metapage_invalidate_folio` raises that question:
```c
static void metapage_invalidate_folio(struct folio *folio, size_t offset,
				      size_t length)
{
	BUG_ON(offset || length < folio_size(folio));
	BUG_ON(folio_test_writeback(folio));
	metapage_release_folio(folio, 0);
}
```
The thing is, `invalidate_folio` itself is called with `folio->mapping` set. But it dawns on me that `release_folio` could be called in other scenarios like `try_to_free_buffers` or similar, where the folio might have been truncated, and `folio->mapping` *could* be NULL. So yes, the change to `remove_metapage` is definitely a good move. We've eliminated that potential for a nasty dereference.
Now, let's make sure that `mp->sb` is always valid within `remove_metapage`. Thinking it through, `mp` is allocated within `__get_metapage`, and its `mp->sb` field is set to the inode's superblock (`inode->i_sb`). Crucially, that `mp->sb` field is never cleared until `free_metapage` is called. So, barring any bugs there, `mp->sb` *should* always be valid, which is reassuring.
Just to be thorough, I should look for other spots where `folio->mapping` is used. I'll need to check the `__metapage_migrate_folio` function to make sure nothing fishy is happening there.
|
| 334/4 | 2026/03/16 16:31 | tool | codesearch-definition-source |
0mArgs:map[ContextFile:fs/jfs/jfs_metapage.c IncludeLines:true Name:__metapage_migrate_folio] Results: map[SourceCode: 232: static int __metapage_migrate_folio(struct address_space *mapping,
233: struct folio *dst, struct folio *src,
234: enum migrate_mode mode)
235: {
236: struct metapage *mp;
237: int page_offset;
238: int rc;
239:
240: mp = folio_to_mp(src, 0);
241: if (metapage_locked(mp))
242: return -EAGAIN;
243:
244: rc = filemap_migrate_folio(mapping, dst, src, mode);
245: if (rc)
246: return rc;
247:
248: if (unlikely(insert_metapage(dst, mp)))
249: return -EAGAIN;
250:
251: page_offset = mp->data - folio_address(src);
252: mp->data = folio_address(dst) + page_offset;
253: mp->folio = dst;
254: remove_metapage(src, mp);
255:
256: return 0;
257: }
SourceFile:fs/jfs/jfs_metapage.c] |
| 335/4 | 2026/03/16 16:31 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 60825 output: 17 thoughts: 916 Thoughts: **Analyzing Metapage Migration and Potential Null Pointer Dereferences**
Okay, so I'm looking closely at how `__metapage_migrate_folio` works. It calls `insert_metapage` and `remove_metapage`, which both seem to operate on a `metapage` struct (`mp`). My initial thought was, "Hey, `mp` is valid, so `mp->sb` must also be valid". However, that assumption needs re-evaluation, especially given the context of metapage management within the JFS filesystem.
Then I recalled a scenario where `insert_metapage` is called with a NULL `mp`. Specifically, in `metapage_read_folio`, the code checks if `folio->private` is NULL, and if so, calls `insert_metapage` with `NULL`. Luckily, I know that `insert_metapage` handles this case gracefully. If `mp` is indeed NULL, it correctly avoids dereferencing `mp->sb` and instead initializes the `meta_anchor` structure, attaching it to the folio. Phew, no crash there.
Now, I'm thinking about other potential pitfalls. I need to make sure `folio->mapping` isn't used in a place where it might be NULL, leading to a crash. I know `metapage_write_folio` uses `folio->mapping->host`, which would definitely crash if `folio->mapping` is NULL. I went back and re-examined how `metapage_write_folio` is called. It's called from `metapage_writepages` and `metapage_write_one`. `metapage_writepages` uses `writeback_iter`, which handles NULL folio mappings. `metapage_write_one` is called from `force_metapage` and `release_metapage`. I remember adding checks in both to ensure `metapage_write_one` isn't called if `folio->mapping` is NULL. Good. Looks like I've covered those cases.
So, let's summarize the changes I've analyzed:
1. `insert_metapage` was changed to correctly use `mp->sb->s_blocksize_bits` instead of relying on `folio->mapping->host->i_blkbits`.
2. `remove_metapage` was also modified to similarly use `mp->sb->s_blocksize_bits`.
3. `force_metapage` now checks `folio->mapping` before calling `metapage_write_one`.
4. `release_metapage` now also checks `folio->mapping` before calling `metapage_write_one`.
I believe I've covered all the critical areas regarding NULL checks, but it's always good practice to double-check. I will now examine `fs/jfs/jfs_metapage.c` to look for other uses of `folio->mapping` to be absolutely certain there are no lingering vulnerabilities. |
| 336/4 | 2026/03/16 16:32 | tool | grepper |
Args:map[Expression:folio->mapping] Results: map[Output:
Full output is too long, showing 500 out of 2371 lines.
Use more precise expression if possible.
[Documentation/mm/process_addrs.rst=42=Terminology
--
Documentation/mm/process_addrs.rst-56- :c:struct:`!struct address_space` or :c:struct:`!struct anon_vma` object
Documentation/mm/process_addrs.rst:57: (reachable from a folio via :c:member:`!folio->mapping`). VMAs must be stabilised via
Documentation/mm/process_addrs.rst-58- :c:func:`!anon_vma_[try]lock_read` or :c:func:`!anon_vma_[try]lock_write` for
--
arch/s390/kernel/uv.c=296=int s390_wiggle_split_folio(struct mm_struct *mm, struct folio *folio)
--
arch/s390/kernel/uv.c-341- if (!folio_test_dirty(folio) || folio_test_anon(folio) ||
arch/s390/kernel/uv.c:342: !folio->mapping || !mapping_can_writeback(folio->mapping)) {
arch/s390/kernel/uv.c-343- folio_unlock(folio);
--
arch/s390/kernel/uv.c-353- */
arch/s390/kernel/uv.c:354: mapping = folio->mapping;
arch/s390/kernel/uv.c-355- lstart = folio_pos(folio);
--
drivers/dax/device.c=83=static void dax_set_mapping(struct vm_fault *vmf, unsigned long pfn,
--
drivers/dax/device.c-100-
drivers/dax/device.c:101: if (folio->mapping)
drivers/dax/device.c-102- continue;
drivers/dax/device.c-103-
drivers/dax/device.c:104: folio->mapping = filp->f_mapping;
drivers/dax/device.c-105- folio->index = pgoff + i;
--
fs/affs/file.c=528=static int affs_do_read_folio_ofs(struct folio *folio, size_t to, int create)
fs/affs/file.c-529-{
fs/affs/file.c:530: struct inode *inode = folio->mapping->host;
fs/affs/file.c-531- struct super_block *sb = inode->i_sb;
--
fs/affs/file.c=631=static int affs_read_folio_ofs(struct file *file, struct folio *folio)
fs/affs/file.c-632-{
fs/affs/file.c:633: struct inode *inode = folio->mapping->host;
fs/affs/file.c-634- size_t to;
--
fs/affs/symlink.c=14=static int affs_symlink_read_folio(struct file *file, struct folio *folio)
--
fs/affs/symlink.c-16- struct buffer_head *bh;
fs/affs/symlink.c:17: struct inode *inode = folio->mapping->host;
fs/affs/symlink.c-18- char *link = folio_address(folio);
--
fs/befs/linuxvfs.c=471=static int befs_symlink_read_folio(struct file *unused, struct folio *folio)
fs/befs/linuxvfs.c-472-{
fs/befs/linuxvfs.c:473: struct inode *inode = folio->mapping->host;
fs/befs/linuxvfs.c-474- struct super_block *sb = inode->i_sb;
--
fs/btrfs/defrag.c=845=static struct folio *defrag_prepare_one_folio(struct btrfs_inode *inode, pgoff_t index)
--
fs/btrfs/defrag.c-905- */
fs/btrfs/defrag.c:906: if (folio->mapping != mapping || !folio->private) {
fs/btrfs/defrag.c-907- folio_unlock(folio);
--
fs/btrfs/defrag.c-919- folio_lock(folio);
fs/btrfs/defrag.c:920: if (folio->mapping != mapping || !folio->private) {
fs/btrfs/defrag.c-921- folio_unlock(folio);
--
fs/btrfs/extent_io.c=302=static noinline int lock_delalloc_folios(struct inode *inode,
--
fs/btrfs/extent_io.c-330- folio_lock(folio);
fs/btrfs/extent_io.c:331: if (!folio_test_dirty(folio) || folio->mapping != mapping) {
fs/btrfs/extent_io.c-332- folio_unlock(folio);
--
fs/btrfs/extent_io.c=517=static void end_bbio_data_write(struct btrfs_bio *bbio)
--
fs/btrfs/extent_io.c-543- if (error)
fs/btrfs/extent_io.c:544: mapping_set_error(folio->mapping, error);
fs/btrfs/extent_io.c-545- btrfs_folio_clear_writeback(fs_info, folio, start, len);
--
fs/btrfs/extent_io.c=869=static int attach_extent_buffer_folio(struct extent_buffer *eb,
--
fs/btrfs/extent_io.c-881- */
fs/btrfs/extent_io.c:882: if (folio->mapping)
fs/btrfs/extent_io.c:883: lockdep_assert_held(&folio->mapping->i_private_lock);
fs/btrfs/extent_io.c-884-
--
fs/btrfs/extent_io.c=908=int set_folio_extent_mapped(struct folio *folio)
--
fs/btrfs/extent_io.c-911-
fs/btrfs/extent_io.c:912: ASSERT(folio->mapping);
fs/btrfs/extent_io.c-913-
--
fs/btrfs/extent_io.c=926=void clear_folio_extent_mapped(struct folio *folio)
--
fs/btrfs/extent_io.c-929-
fs/btrfs/extent_io.c:930: ASSERT(folio->mapping);
fs/btrfs/extent_io.c-931-
--
fs/btrfs/extent_io.c=997=static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
--
fs/btrfs/extent_io.c-1000-{
fs/btrfs/extent_io.c:1001: struct inode *inode = folio->mapping->host;
fs/btrfs/extent_io.c-1002- struct btrfs_fs_info *fs_info = inode_to_fs_info(inode);
--
fs/btrfs/extent_io.c=1336=int btrfs_read_folio(struct file *file, struct folio *folio)
fs/btrfs/extent_io.c-1337-{
fs/btrfs/extent_io.c:1338: struct inode *vfs_inode = folio->mapping->host;
fs/btrfs/extent_io.c-1339- struct btrfs_inode *inode = BTRFS_I(vfs_inode);
--
fs/btrfs/extent_io.c=1835=static int extent_writepage(struct folio *folio, struct btrfs_bio_ctrl *bio_ctrl)
fs/btrfs/extent_io.c-1836-{
fs/btrfs/extent_io.c:1837: struct btrfs_inode *inode = BTRFS_I(folio->mapping->host);
fs/btrfs/extent_io.c-1838- struct btrfs_fs_info *fs_info = inode->root->fs_info;
--
fs/btrfs/extent_io.c-1913- if (ret < 0)
fs/btrfs/extent_io.c:1914: mapping_set_error(folio->mapping, ret);
fs/btrfs/extent_io.c-1915- /*
--
fs/btrfs/extent_io.c=2455=static int extent_write_cache_pages(struct address_space *mapping,
--
fs/btrfs/extent_io.c-2529- * the folio lock: the folio may be truncated or
fs/btrfs/extent_io.c:2530: * invalidated (changing folio->mapping to NULL).
fs/btrfs/extent_io.c-2531- */
--
fs/btrfs/extent_io.c-2536-
fs/btrfs/extent_io.c:2537: if (unlikely(folio->mapping != mapping)) {
fs/btrfs/extent_io.c-2538- folio_unlock(folio);
--
fs/btrfs/extent_io.c=2915=static bool folio_range_has_eb(struct folio *folio)
--
fs/btrfs/extent_io.c-2918-
fs/btrfs/extent_io.c:2919: lockdep_assert_held(&folio->mapping->i_private_lock);
fs/btrfs/extent_io.c-2920-
--
fs/btrfs/extent_io.c=2929=static void detach_extent_buffer_folio(const struct extent_buffer *eb, struct folio *folio)
--
fs/btrfs/extent_io.c-2931- struct btrfs_fs_info *fs_info = eb->fs_info;
fs/btrfs/extent_io.c:2932: struct address_space *mapping = folio->mapping;
fs/btrfs/extent_io.c-2933- const bool mapped = !test_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags);
--
fs/btrfs/extent_io.c=3252=static struct extent_buffer *grab_extent_buffer(struct btrfs_fs_info *fs_info,
--
fs/btrfs/extent_io.c-3256-
fs/btrfs/extent_io.c:3257: lockdep_assert_held(&folio->mapping->i_private_lock);
fs/btrfs/extent_io.c-3258-
--
fs/btrfs/extent_io.c=3712=static void btree_clear_folio_dirty_tag(struct folio *folio)
--
fs/btrfs/extent_io.c-3715- ASSERT(folio_test_locked(folio));
fs/btrfs/extent_io.c:3716: xa_lock_irq(&folio->mapping->i_pages);
fs/btrfs/extent_io.c-3717- if (!folio_test_dirty(folio))
fs/btrfs/extent_io.c:3718: __xa_clear_mark(&folio->mapping->i_pages, folio->index,
fs/btrfs/extent_io.c-3719- PAGECACHE_TAG_DIRTY);
fs/btrfs/extent_io.c:3720: xa_unlock_irq(&folio->mapping->i_pages);
fs/btrfs/extent_io.c-3721-}
--
fs/btrfs/extent_io.c=4479=static int try_release_subpage_extent_buffer(struct folio *folio)
--
fs/btrfs/extent_io.c-4527- */
fs/btrfs/extent_io.c:4528: spin_lock(&folio->mapping->i_private_lock);
fs/btrfs/extent_io.c-4529- if (!folio_test_private(folio))
--
fs/btrfs/extent_io.c-4532- ret = 0;
fs/btrfs/extent_io.c:4533: spin_unlock(&folio->mapping->i_private_lock);
fs/btrfs/extent_io.c-4534- return ret;
--
fs/btrfs/extent_io.c=4537=int try_release_extent_buffer(struct folio *folio)
--
fs/btrfs/extent_io.c-4547- */
fs/btrfs/extent_io.c:4548: spin_lock(&folio->mapping->i_private_lock);
fs/btrfs/extent_io.c-4549- if (!folio_test_private(folio)) {
fs/btrfs/extent_io.c:4550: spin_unlock(&folio->mapping->i_private_lock);
fs/btrfs/extent_io.c-4551- return 1;
--
fs/btrfs/extent_io.c-4564- spin_unlock(&eb->refs_lock);
fs/btrfs/extent_io.c:4565: spin_unlock(&folio->mapping->i_private_lock);
fs/btrfs/extent_io.c-4566- return 0;
fs/btrfs/extent_io.c-4567- }
fs/btrfs/extent_io.c:4568: spin_unlock(&folio->mapping->i_private_lock);
fs/btrfs/extent_io.c-4569-
--
fs/btrfs/file.c=798=static int prepare_uptodate_folio(struct inode *inode, struct folio *folio, u64 pos,
--
fs/btrfs/file.c-830- */
fs/btrfs/file.c:831: if (folio->mapping != inode->i_mapping || !folio_test_private(folio)) {
fs/btrfs/file.c-832- folio_unlock(folio);
--
fs/btrfs/file.c=1851=static vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
--
fs/btrfs/file.c-1921-
fs/btrfs/file.c:1922: if ((folio->mapping != inode->vfs_inode.i_mapping) ||
fs/btrfs/file.c-1923- (page_start >= size)) {
--
fs/btrfs/free-space-cache.c=446=static int io_ctl_prepare_pages(struct btrfs_io_ctl *io_ctl, bool uptodate)
--
fs/btrfs/free-space-cache.c-475- folio_lock(folio);
fs/btrfs/free-space-cache.c:476: if (folio->mapping != inode->i_mapping) {
fs/btrfs/free-space-cache.c-477- btrfs_err(BTRFS_I(inode)->root->fs_info,
--
fs/btrfs/inode.c=2837=static void btrfs_writepage_fixup_worker(struct btrfs_work *work)
--
fs/btrfs/inode.c-2862- * Before we queued this fixup, we took a reference on the folio.
fs/btrfs/inode.c:2863: * folio->mapping may go NULL, but it shouldn't be moved to a different
fs/btrfs/inode.c-2864- * address space.
fs/btrfs/inode.c-2865- */
fs/btrfs/inode.c:2866: if (!folio->mapping || !folio_test_dirty(folio) ||
fs/btrfs/inode.c-2867- !folio_test_checked(folio)) {
--
fs/btrfs/inode.c-2943- */
fs/btrfs/inode.c:2944: mapping_set_error(folio->mapping, ret);
fs/btrfs/inode.c-2945- btrfs_mark_ordered_io_finished(inode, folio, page_start,
--
fs/btrfs/inode.c=2973=int btrfs_writepage_cow_fixup(struct folio *folio)
fs/btrfs/inode.c-2974-{
fs/btrfs/inode.c:2975: struct inode *inode = folio->mapping->host;
fs/btrfs/inode.c-2976- struct btrfs_fs_info *fs_info = inode_to_fs_info(inode);
--
fs/btrfs/inode.c-3016- * takes place outside of the folio lock, and we can't trust
fs/btrfs/inode.c:3017: * folio->mapping outside of the folio lock.
fs/btrfs/inode.c-3018- */
--
fs/btrfs/inode.c=4973=static int truncate_block_zero_beyond_eof(struct btrfs_inode *inode, u64 start)
--
fs/btrfs/inode.c-4990- folio_lock(folio);
fs/btrfs/inode.c:4991: if (folio->mapping != mapping) {
fs/btrfs/inode.c-4992- folio_unlock(folio);
--
fs/btrfs/inode.c=5036=int btrfs_truncate_block(struct btrfs_inode *inode, u64 offset, u64 start, u64 end)
--
fs/btrfs/inode.c-5134- folio_lock(folio);
fs/btrfs/inode.c:5135: if (folio->mapping != mapping) {
fs/btrfs/inode.c-5136- folio_unlock(folio);
--
fs/btrfs/ordered-data.c=350=static bool can_finish_ordered_extent(struct btrfs_ordered_extent *ordered,
--
fs/btrfs/ordered-data.c-359- if (folio) {
fs/btrfs/ordered-data.c:360: ASSERT(folio->mapping);
fs/btrfs/ordered-data.c-361- ASSERT(folio_pos(folio) <= file_offset);
--
fs/btrfs/relocation.c=2778=static int relocate_one_folio(struct reloc_control *rc,
--
fs/btrfs/relocation.c-2831- }
fs/btrfs/relocation.c:2832: if (folio->mapping != inode->i_mapping) {
fs/btrfs/relocation.c-2833- folio_unlock(folio);
--
fs/btrfs/send.c=5198=static int put_file_data(struct send_ctx *sctx, u64 offset, u32 len)
--
fs/btrfs/send.c-5249- }
fs/btrfs/send.c:5250: if (folio->mapping != mapping) {
fs/btrfs/send.c-5251- folio_unlock(folio);
--
fs/btrfs/subpage.c=66=int btrfs_attach_folio_state(const struct btrfs_fs_info *fs_info,
--
fs/btrfs/subpage.c-78- */
fs/btrfs/subpage.c:79: if (folio->mapping)
fs/btrfs/subpage.c-80- ASSERT(folio_test_locked(folio));
--
fs/btrfs/subpage.c=148=void btrfs_folio_inc_eb_refs(const struct btrfs_fs_info *fs_info, struct folio *folio)
--
fs/btrfs/subpage.c-154-
fs/btrfs/subpage.c:155: ASSERT(folio_test_private(folio) && folio->mapping);
fs/btrfs/subpage.c:156: lockdep_assert_held(&folio->mapping->i_private_lock);
fs/btrfs/subpage.c-157-
--
fs/btrfs/subpage.c=162=void btrfs_folio_dec_eb_refs(const struct btrfs_fs_info *fs_info, struct folio *folio)
--
fs/btrfs/subpage.c-168-
fs/btrfs/subpage.c:169: ASSERT(folio_test_private(folio) && folio->mapping);
fs/btrfs/subpage.c:170: lockdep_assert_held(&folio->mapping->i_private_lock);
fs/btrfs/subpage.c-171-
--
fs/btrfs/subpage.c=177=static void btrfs_subpage_assert(const struct btrfs_fs_info *fs_info,
--
fs/btrfs/subpage.c-187- */
fs/btrfs/subpage.c:188: if (folio->mapping)
fs/btrfs/subpage.c-189- ASSERT(folio_pos(folio) <= start &&
--
fs/btrfs/subpage.h=107=static inline bool btrfs_is_subpage(const struct btrfs_fs_info *fs_info,
--
fs/btrfs/subpage.h-109-{
fs/btrfs/subpage.h:110: if (folio->mapping && folio->mapping->host)
fs/btrfs/subpage.h:111: ASSERT(is_data_inode(BTRFS_I(folio->mapping->host)));
fs/btrfs/subpage.h-112- return fs_info->sectorsize < folio_size(folio);
--
fs/buffer.c=349=static void end_buffer_async_read_io(struct buffer_head *bh, int uptodate)
fs/buffer.c-350-{
fs/buffer.c:351: struct inode *inode = bh->b_folio->mapping->host;
fs/buffer.c-352- bool decrypt = fscrypt_inode_uses_fs_layer_crypto(inode);
--
fs/buffer.c=675=void mark_buffer_dirty_inode(struct buffer_head *bh, struct inode *inode)
--
fs/buffer.c-677- struct address_space *mapping = inode->i_mapping;
fs/buffer.c:678: struct address_space *buffer_mapping = bh->b_folio->mapping;
fs/buffer.c-679-
--
fs/buffer.c=1117=__getblk_slow(struct block_device *bdev, sector_t block,
--
fs/buffer.c-1174- *
fs/buffer.c:1175: * mark_buffer_dirty() is atomic. It takes bh->b_folio->mapping->i_private_lock,
fs/buffer.c-1176- * i_pages lock and mapping->host->i_lock.
--
fs/buffer.c=1178=void mark_buffer_dirty(struct buffer_head *bh)
--
fs/buffer.c-1200- if (!folio_test_set_dirty(folio)) {
fs/buffer.c:1201: mapping = folio->mapping;
fs/buffer.c-1202- if (mapping)
--
fs/buffer.c=1211=void mark_buffer_write_io_error(struct buffer_head *bh)
--
fs/buffer.c-1214- /* FIXME: do we need to set this in both places? */
fs/buffer.c:1215: if (bh->b_folio && bh->b_folio->mapping)
fs/buffer.c:1216: mapping_set_error(bh->b_folio->mapping, -EIO);
fs/buffer.c-1217- if (bh->b_assoc_map)
--
fs/buffer.c=1245=void __bforget(struct buffer_head *bh)
--
fs/buffer.c-1248- if (bh->b_assoc_map) {
fs/buffer.c:1249: struct address_space *buffer_mapping = bh->b_folio->mapping;
fs/buffer.c-1250-
--
fs/buffer.c=1682=struct buffer_head *create_empty_buffers(struct folio *folio,
--
fs/buffer.c-1696-
fs/buffer.c:1697: spin_lock(&folio->mapping->i_private_lock);
fs/buffer.c-1698- if (folio_test_uptodate(folio) || folio_test_dirty(folio)) {
--
fs/buffer.c-1708- folio_attach_private(folio, head);
fs/buffer.c:1709: spin_unlock(&folio->mapping->i_private_lock);
fs/buffer.c-1710-
--
fs/buffer.c=1833=int __block_write_full_folio(struct inode *inode, struct folio *folio,
--
fs/buffer.c-1978- BUG_ON(folio_test_writeback(folio));
fs/buffer.c:1979: mapping_set_error(folio->mapping, err);
fs/buffer.c-1980- folio_start_writeback(folio);
--
fs/buffer.c=2107=int __block_write_begin_int(struct folio *folio, loff_t pos, unsigned len,
--
fs/buffer.c-2111- size_t to = from + len;
fs/buffer.c:2112: struct inode *inode = folio->mapping->host;
fs/buffer.c-2113- size_t block_start, block_end;
--
fs/buffer.c=2386=int block_read_full_folio(struct folio *folio, get_block_t *get_block)
fs/buffer.c-2387-{
fs/buffer.c:2388: struct inode *inode = folio->mapping->host;
fs/buffer.c-2389- sector_t iblock, lblock;
--
fs/buffer.c=2613=int block_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf,
--
fs/buffer.c-2623- size = i_size_read(inode);
fs/buffer.c:2624: if ((folio->mapping != inode->i_mapping) ||
fs/buffer.c-2625- (folio_pos(folio) >= size)) {
--
fs/buffer.c=2725=int block_write_full_folio(struct folio *folio, struct writeback_control *wbc,
--
fs/buffer.c-2727-{
fs/buffer.c:2728: struct inode * const inode = folio->mapping->host;
fs/buffer.c-2729- loff_t i_size = i_size_read(inode);
--
fs/buffer.c=2938=bool try_to_free_buffers(struct folio *folio)
fs/buffer.c-2939-{
fs/buffer.c:2940: struct address_space * const mapping = folio->mapping;
fs/buffer.c-2941- struct buffer_head *buffers_to_free = NULL;
--
fs/ceph/addr.c=140=static void ceph_invalidate_folio(struct folio *folio, size_t offset,
--
fs/ceph/addr.c-142-{
fs/ceph/addr.c:143: struct inode *inode = folio->mapping->host;
fs/ceph/addr.c-144- struct ceph_client *cl = ceph_inode_to_client(inode);
--
fs/ceph/addr.c=717=static int write_folio_nounlock(struct folio *folio,
--
fs/ceph/addr.c-720- struct page *page = &folio->page;
fs/ceph/addr.c:721: struct inode *inode = folio->mapping->host;
fs/ceph/addr.c-722- struct ceph_inode_info *ci = ceph_inode(inode);
--
fs/ceph/addr.c=1134=int ceph_check_page_before_write(struct address_space *mapping,
--
fs/ceph/addr.c-1144- /* only dirty folios, or our accounting breaks */
fs/ceph/addr.c:1145: if (unlikely(!folio_test_dirty(folio) || folio->mapping != mapping)) {
fs/ceph/addr.c-1146- doutc(cl, "!dirty or !mapping %p\n", folio);
--
fs/ceph/addr.c=1781=ceph_find_incompatible(struct folio *folio)
fs/ceph/addr.c-1782-{
fs/ceph/addr.c:1783: struct inode *inode = folio->mapping->host;
fs/ceph/addr.c-1784- struct ceph_client *cl = ceph_inode_to_client(inode);
--
fs/coda/symlink.c=23=static int coda_symlink_filler(struct file *file, struct folio *folio)
fs/coda/symlink.c-24-{
fs/coda/symlink.c:25: struct inode *inode = folio->mapping->host;
fs/coda/symlink.c-26- int error;
--
fs/cramfs/inode.c=821=static int cramfs_read_folio(struct file *file, struct folio *folio)
fs/cramfs/inode.c-822-{
fs/cramfs/inode.c:823: struct inode *inode = folio->mapping->host;
fs/cramfs/inode.c-824- u32 maxblock;
--
fs/crypto/crypto.c=172=struct page *fscrypt_encrypt_pagecache_blocks(struct folio *folio,
--
fs/crypto/crypto.c-174-{
fs/crypto/crypto.c:175: const struct inode *inode = folio->mapping->host;
fs/crypto/crypto.c-176- const struct fscrypt_inode_info *ci = fscrypt_get_inode_info_raw(inode);
--
fs/crypto/crypto.c=255=int fscrypt_decrypt_pagecache_blocks(struct folio *folio, size_t len,
--
fs/crypto/crypto.c-257-{
fs/crypto/crypto.c:258: const struct inode *inode = folio->mapping->host;
fs/crypto/crypto.c-259- const struct fscrypt_inode_info *ci = fscrypt_get_inode_info_raw(inode);
--
fs/dax.c=351=static inline bool dax_folio_is_shared(struct folio *folio)
fs/dax.c-352-{
fs/dax.c:353: return !folio->mapping && folio->share;
fs/dax.c-354-}
--
fs/dax.c=366=static void dax_folio_make_shared(struct folio *folio)
--
fs/dax.c-369- * folio is not currently shared so mark it as shared by clearing
fs/dax.c:370: * folio->mapping.
fs/dax.c-371- */
fs/dax.c:372: folio->mapping = NULL;
fs/dax.c-373-
--
fs/dax.c=381=static inline unsigned long dax_folio_put(struct folio *folio)
--
fs/dax.c-393-
fs/dax.c:394: folio->mapping = NULL;
fs/dax.c-395- order = folio_order(folio);
--
fs/dax.c-407-
fs/dax.c:408: new_folio->mapping = NULL;
fs/dax.c-409- /*
--
fs/dax.c=441=static void dax_associate_entry(void *entry, struct address_space *mapping,
--
fs/dax.c-451- index = linear_page_index(vma, address & ~(size - 1));
fs/dax.c:452: if (shared && (folio->mapping || dax_folio_is_shared(folio))) {
fs/dax.c:453: if (folio->mapping)
fs/dax.c-454- dax_folio_make_shared(folio);
--
fs/dax.c-459- } else {
fs/dax.c:460: WARN_ON_ONCE(folio->mapping);
fs/dax.c-461- dax_folio_init(entry);
fs/dax.c-462- folio = dax_to_folio(entry);
fs/dax.c:463: folio->mapping = mapping;
fs/dax.c-464- folio->index = index;
--
fs/dax.c=500=dax_entry_t dax_lock_folio(struct folio *folio)
--
fs/dax.c-504-
fs/dax.c:505: /* Ensure folio->mapping isn't freed while we look at it */
fs/dax.c-506- rcu_read_lock();
fs/dax.c-507- for (;;) {
fs/dax.c:508: struct address_space *mapping = READ_ONCE(folio->mapping);
fs/dax.c-509-
--
fs/dax.c-526- xas_lock_irq(&xas);
fs/dax.c:527: if (mapping != folio->mapping) {
fs/dax.c-528- xas_unlock_irq(&xas);
--
fs/dax.c=547=void dax_unlock_folio(struct folio *folio, dax_entry_t cookie)
fs/dax.c-548-{
fs/dax.c:549: struct address_space *mapping = folio->mapping;
fs/dax.c-550- XA_STATE(xas, &mapping->i_pages, folio->index);
--
fs/ecryptfs/crypto.c=350=int ecryptfs_encrypt_page(struct folio *folio)
--
fs/ecryptfs/crypto.c-359-
fs/ecryptfs/crypto.c:360: ecryptfs_inode = folio->mapping->host;
fs/ecryptfs/crypto.c-361- crypt_stat =
--
fs/ecryptfs/crypto.c=420=int ecryptfs_decrypt_page(struct folio *folio)
--
fs/ecryptfs/crypto.c-428-
fs/ecryptfs/crypto.c:429: ecryptfs_inode = folio->mapping->host;
fs/ecryptfs/crypto.c-430- crypt_stat =
--
fs/ecryptfs/mmap.c=91=ecryptfs_copy_up_encrypted_with_header(struct folio *folio,
--
fs/ecryptfs/mmap.c-116- rc = ecryptfs_read_xattr_region(
fs/ecryptfs/mmap.c:117: page_virt, folio->mapping->host);
fs/ecryptfs/mmap.c-118- strip_xattr_flag(page_virt + 16, crypt_stat);
--
fs/ecryptfs/mmap.c-138- (lower_offset & ~PAGE_MASK),
fs/ecryptfs/mmap.c:139: crypt_stat->extent_size, folio->mapping->host);
fs/ecryptfs/mmap.c-140- if (rc) {
--
fs/ecryptfs/mmap.c=163=static int ecryptfs_read_folio(struct file *file, struct folio *folio)
fs/ecryptfs/mmap.c-164-{
fs/ecryptfs/mmap.c:165: struct inode *inode = folio->mapping->host;
fs/ecryptfs/mmap.c-166- struct ecryptfs_crypt_stat *crypt_stat =
--
fs/ecryptfs/mmap.c=214=static int fill_zeros_to_end_of_page(struct folio *folio, unsigned int to)
fs/ecryptfs/mmap.c-215-{
fs/ecryptfs/mmap.c:216: struct inode *inode = folio->mapping->host;
fs/ecryptfs/mmap.c-217- int end_byte_in_page;
--
fs/efs/symlink.c=15=static int efs_symlink_read_folio(struct file *file, struct folio *folio)
--
fs/efs/symlink.c-18- struct buffer_head *bh;
fs/efs/symlink.c:19: struct inode *inode = folio->mapping->host;
fs/efs/symlink.c-20- efs_block_t size = inode->i_size;
--
fs/erofs/fscache.c=209=static int erofs_fscache_meta_read_folio(struct file *data, struct folio *folio)
fs/erofs/fscache.c-210-{
fs/erofs/fscache.c:211: struct erofs_fscache *ctx = folio->mapping->host->i_private;
fs/erofs/fscache.c-212- int ret = -ENOMEM;
--
fs/erofs/fscache.c-215-
fs/erofs/fscache.c:216: req = erofs_fscache_req_alloc(folio->mapping,
fs/erofs/fscache.c-217- folio_pos(folio), folio_size(folio));
--
]
] |
| 337/4 | 2026/03/16 16:32 | llm | patch-generator |
0m Model: gemini-3.1-pro-preview Tokens: input: 72405 output: 48 thoughts: 255

Thoughts: **My Assessment of the Fix**

Alright, I'm feeling good about this. This looks like a solid, complete solution to the problem. Let me walk through my understanding:

First, in `release_metapage()`, the key is to handle the potential for `folio->mapping` being `NULL`. The fix explicitly checks for this. If `folio->mapping` *is* `NULL`, we then need to ensure we don't try to dereference it later. So, the code clears the `META_dirty` and `META_sync` flags. This makes perfect sense; if there's no mapping, there's nothing to write, and those flags would be incorrect. Then, it removes the metapage from the logsync list if necessary. This prevents us from calling `metapage_write_one()` on a truncated folio, which is, of course, where the NULL pointer dereference was happening. Importantly, it makes sure `drop_metapage()` properly frees the metapage. Good.

Second, in `force_metapage()`, the same logic applies. We check for the `NULL` `folio->mapping` before calling `metapage_write_one()`. If it's `NULL`, we clear `META_dirty` and unlock the folio. Again, this avoids the potential dereference. Makes perfect sense.

Finally, in `insert_metapage()` and `remove_metapage()`, the change away from `folio->mapping->host->i_blkbits` is critical. It's cleverly done. By changing it to `mp->sb->s_blocksize_bits`, we avoid the dereference when `folio->mapping` might be `NULL`. This is especially relevant on architectures where `PAGE_SIZE` is greater than 4096 – a situation where this particular vulnerability becomes exposed. Good attention to detail on that.

I'm confident this addresses the issue comprehensively. This should be a clean fix. Now I can formulate my final response. |
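The release-path handling described in the assessment above can be modeled in a few lines of userspace C. This is a sketch, not the kernel patch itself: the `META_dirty`/`META_sync` flags and the logsync removal come from the source discussed above, while the stub types and names (`folio_stub`, `metapage_stub`, `release_metapage_stub`, `remove_from_logsync_stub`) are hypothetical stand-ins for the real jfs structures.

```c
/*
 * Userspace sketch (not kernel code) of the truncated-folio handling the
 * patch adds to release_metapage(): when folio->mapping is NULL, skip
 * writeback, clear META_dirty and META_sync, and drop the logsync link.
 * All *_stub types below are simplified stand-ins for the real jfs types.
 */
#include <assert.h>
#include <stddef.h>

enum { META_dirty = 1 << 0, META_sync = 1 << 1 };

struct mapping;                       /* opaque stand-in for address_space */

struct folio_stub {
	struct mapping *mapping;      /* NULL after truncation */
};

struct metapage_stub {
	unsigned long flag;
	unsigned long lsn;            /* nonzero => on the logsync list */
	int on_logsync;
	struct folio_stub *folio;
};

/* stand-in for remove_from_logsync() */
static void remove_from_logsync_stub(struct metapage_stub *mp)
{
	mp->lsn = 0;
	mp->on_logsync = 0;
}

/* returns 1 if the page was (or would be) written back, 0 if skipped */
static int release_metapage_stub(struct metapage_stub *mp)
{
	if (mp->folio->mapping == NULL) {
		/*
		 * Folio was truncated: nothing to write back. Clearing
		 * META_dirty matters because drop_metapage() refuses to
		 * free a metapage whose dirty bit is still set.
		 */
		mp->flag &= ~(unsigned long)(META_dirty | META_sync);
		if (mp->lsn)
			remove_from_logsync_stub(mp);
		return 0;
	}
	/* the normal path would call metapage_write_one() here */
	return 1;
}
```

The point of clearing `META_dirty` on the truncated path is visible here: a later `drop_metapage()` equivalent that tests the dirty bit will then agree to free the metapage instead of leaking it.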
| 338/4 | 2026/03/16 16:32 | tool | codeexpert |
7m Args: map[Question: Is there any other place in fs/jfs/jfs_metapage.c where folio->mapping is dereferenced without checking if it is NULL?] Results: map[Answer: The analysis of `fs/jfs/jfs_metapage.c` reveals several locations where `folio->mapping` is dereferenced without a prior NULL check. While folios in the page cache typically have a valid mapping, certain conditions (such as truncation or eviction) can lead to `folio->mapping` being NULL, potentially causing a kernel crash.
The following locations in `fs/jfs/jfs_metapage.c` dereference `folio->mapping` without checking for NULL:
1. **`insert_metapage()`** (Line 104, within the `#if MPS_PER_PAGE > 1` block):
```c
l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
```
This dereferences `folio->mapping` to access the host inode's block bits.
2. **`remove_metapage()`** (Line 118, within the `#if MPS_PER_PAGE > 1` block):
```c
int l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
```
Similar to `insert_metapage`, this dereferences `folio->mapping`.
3. **`last_write_complete()`** (Line 330):
```c
mapping_set_error(folio->mapping, err);
```
In the event of an I/O error (`status` is non-zero), `mapping_set_error` is called, which dereferences `folio->mapping`.
4. **`metapage_write_folio()`** (Line 355):
```c
struct inode *inode = folio->mapping->host;
```
The function immediately obtains the inode from the folio's mapping.
5. **`metapage_read_folio()`** (Line 466):
```c
struct inode *inode = folio->mapping->host;
```
Like the write path, the read path dereferences the mapping to get the inode.
6. **`metapage_write_one()`** (Line 625 and 645):
```c
struct address_space *mapping = folio->mapping;
...
ret = filemap_check_errors(mapping);
```
Although `folio->mapping` is first assigned to a local variable `mapping`, it is later passed to `filemap_check_errors()`, which dereferences it (specifically `mapping->flags`).
In summary, there are multiple places in `fs/jfs/jfs_metapage.c` where `folio->mapping` is assumed to be non-NULL. If these functions are called on a folio that has been detached from its address space, a NULL pointer dereference will occur.] |
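For the `insert_metapage()`/`remove_metapage()` sites listed above (items 1 and 2), the proposed fix replaces `folio->mapping->host->i_blkbits` with `mp->sb->s_blocksize_bits`. Below is a minimal userspace sketch of the slot-index computation, assuming (per the commit description) that the two block-size values are identical for jfs metadata inodes and that `L2PSIZE` is 12; `mp_index` is a hypothetical helper, not a function in the file.

```c
/*
 * Illustration (userspace, not kernel code) of why the patch can swap
 * folio->mapping->host->i_blkbits for mp->sb->s_blocksize_bits in
 * insert_metapage()/remove_metapage(): for jfs metadata inodes both hold
 * the same log2 block size, so the computed anchor slot is identical --
 * but s_blocksize_bits stays reachable after the folio is truncated.
 */
#include <assert.h>

#define L2PSIZE 12	/* log2 of the 4K jfs metapage (PSIZE) */

/*
 * Slot of a metapage within its folio's meta_anchor array, as computed
 * in insert_metapage()/remove_metapage(). Only where `blkbits` comes
 * from differs between the old and new code, not its value.
 */
static int mp_index(unsigned long mp_blkno, int blkbits, int mps_per_page)
{
	int l2mp_blocks = L2PSIZE - blkbits;	/* log2 blocks per metapage */

	return (int)((mp_blkno >> l2mp_blocks) &
		     (unsigned long)(mps_per_page - 1));
}
```

Because only the source of the `blkbits` input changes, not its value, the computed slot is unchanged; what changes is that `s_blocksize_bits` is reachable from the metapage itself even after truncation has zeroed `folio->mapping`.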
| 339/5 | 2026/03/16 16:32 | agent | codeexpert |
7m Model: gemini-3-flash-preview

Instruction: You are a capable Linux kernel developer tasked with researching complex questions about kernel source code. You will be given a concrete question, and need to provide a concrete answer. Use tools extensively while researching the question. Don't make assumptions, or rely on your previous knowledge about the kernel source code, use available tools to access the actual source code. Use all available sources of information:
- kernel source code
- documentation in the Documentation dir in the source tree
- git commits descriptions, git blame
Prefer calling several tools at the same time to save round-trips.

Prompt: Is there any other place in fs/jfs/jfs_metapage.c where folio->mapping is dereferenced without checking if it is NULL?

Reply: The analysis of `fs/jfs/jfs_metapage.c` reveals several locations where `folio->mapping` is dereferenced without a prior NULL check. While folios in the page cache typically have a valid mapping, certain conditions (such as truncation or eviction) can lead to `folio->mapping` being NULL, potentially causing a kernel crash.
The following locations in `fs/jfs/jfs_metapage.c` dereference `folio->mapping` without checking for NULL:
1. **`insert_metapage()`** (Line 104, within the `#if MPS_PER_PAGE > 1` block):
```c
l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
```
This dereferences `folio->mapping` to access the host inode's block bits.
2. **`remove_metapage()`** (Line 118, within the `#if MPS_PER_PAGE > 1` block):
```c
int l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
```
Similar to `insert_metapage`, this dereferences `folio->mapping`.
3. **`last_write_complete()`** (Line 330):
```c
mapping_set_error(folio->mapping, err);
```
In the event of an I/O error (`status` is non-zero), `mapping_set_error` is called, which dereferences `folio->mapping`.
4. **`metapage_write_folio()`** (Line 355):
```c
struct inode *inode = folio->mapping->host;
```
The function immediately obtains the inode from the folio's mapping.
5. **`metapage_read_folio()`** (Line 466):
```c
struct inode *inode = folio->mapping->host;
```
Like the write path, the read path dereferences the mapping to get the inode.
6. **`metapage_write_one()`** (Line 625 and 645):
```c
struct address_space *mapping = folio->mapping;
...
ret = filemap_check_errors(mapping);
```
Although `folio->mapping` is first assigned to a local variable `mapping`, it is later passed to `filemap_check_errors()`, which dereferences it (specifically `mapping->flags`).
In summary, there are multiple places in `fs/jfs/jfs_metapage.c` where `folio->mapping` is assumed to be non-NULL. If these functions are called on a folio that has been detached from its address space, a NULL pointer dereference will occur. |
| 340/6 | 2026/03/16 16:32 | llm | codeexpert |
0m Model: gemini-3-flash-preview Tokens: input: 2286 output: 28 thoughts: 116

Thoughts: **Analyzing Potential NULL Pointer Dereferences in jfs_metapage.c**

Okay, so the user's concerned about potential NULL pointer dereferences of `folio->mapping` within `fs/jfs/jfs_metapage.c`. Right, that's a classic area for errors, especially in complex filesystems. My immediate thought is, let's break this down systematically.

First, I need to pinpoint every single instance where `folio->mapping` is actually *used* in this file. I'm imagining a quick `grep` or a similar search on the file contents will do the trick. That's step one: the locations.

Then, for each of those locations, I've got to carefully examine the surrounding code. Are there any explicit checks for `folio->mapping` being NULL *before* it's dereferenced? That's the crux of the problem. If there aren't, then we have a potential vulnerability. It could be hidden within a function call that itself checks, but that also needs to be verified. I'm looking for a straightforward `if (folio->mapping != NULL)` or something equivalent.

So, let's get the file contents, fire up the search, and let's get down to brass tacks. I'm going to carefully review each line, thinking about possible execution paths and all the implications. This should be interesting. |
| 341/6 | 2026/03/16 16:32 | tool | read-file |
0m Args: map[File: fs/jfs/jfs_metapage.c] Results: map[Contents:// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Copyright (C) International Business Machines Corp., 2000-2005
* Portions Copyright (C) Christoph Hellwig, 2001-2002
*/
#include <linux/blkdev.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/bio.h>
#include <linux/slab.h>
#include <linux/init.h>
#include <linux/buffer_head.h>
#include <linux/mempool.h>
#include <linux/seq_file.h>
#include <linux/writeback.h>
#include <linux/migrate.h>
#include "jfs_incore.h"
#include "jfs_superblock.h"
#include "jfs_filsys.h"
#include "jfs_metapage.h"
#include "jfs_txnmgr.h"
#include "jfs_debug.h"
#ifdef CONFIG_JFS_STATISTICS
static struct {
uint pagealloc; /* # of page allocations */
uint pagefree; /* # of page frees */
uint lockwait; /* # of sleeping lock_metapage() calls */
} mpStat;
#endif
#define metapage_locked(mp) test_bit(META_locked, &(mp)->flag)
#define trylock_metapage(mp) test_and_set_bit_lock(META_locked, &(mp)->flag)
static inline void unlock_metapage(struct metapage *mp)
{
clear_bit_unlock(META_locked, &mp->flag);
wake_up(&mp->wait);
}
static inline void __lock_metapage(struct metapage *mp)
{
DECLARE_WAITQUEUE(wait, current);
INCREMENT(mpStat.lockwait);
add_wait_queue_exclusive(&mp->wait, &wait);
do {
set_current_state(TASK_UNINTERRUPTIBLE);
if (metapage_locked(mp)) {
folio_unlock(mp->folio);
io_schedule();
folio_lock(mp->folio);
}
} while (trylock_metapage(mp));
__set_current_state(TASK_RUNNING);
remove_wait_queue(&mp->wait, &wait);
}
/*
* Must have mp->folio locked
*/
static inline void lock_metapage(struct metapage *mp)
{
if (trylock_metapage(mp))
__lock_metapage(mp);
}
#define METAPOOL_MIN_PAGES 32
static struct kmem_cache *metapage_cache;
static mempool_t *metapage_mempool;
#define MPS_PER_PAGE (PAGE_SIZE >> L2PSIZE)
#if MPS_PER_PAGE > 1
struct meta_anchor {
int mp_count;
atomic_t io_count;
blk_status_t status;
struct metapage *mp[MPS_PER_PAGE];
};
static inline struct metapage *folio_to_mp(struct folio *folio, int offset)
{
struct meta_anchor *anchor = folio->private;
if (!anchor)
return NULL;
return anchor->mp[offset >> L2PSIZE];
}
static inline int insert_metapage(struct folio *folio, struct metapage *mp)
{
struct meta_anchor *a;
int index;
int l2mp_blocks; /* log2 blocks per metapage */
a = folio->private;
if (!a) {
a = kzalloc_obj(struct meta_anchor, GFP_NOFS);
if (!a)
return -ENOMEM;
folio_attach_private(folio, a);
kmap(&folio->page);
}
if (mp) {
l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1);
a->mp_count++;
a->mp[index] = mp;
}
return 0;
}
static inline void remove_metapage(struct folio *folio, struct metapage *mp)
{
struct meta_anchor *a = folio->private;
int l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
int index;
index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1);
BUG_ON(a->mp[index] != mp);
a->mp[index] = NULL;
if (--a->mp_count == 0) {
kfree(a);
folio_detach_private(folio);
kunmap(&folio->page);
}
}
static inline void inc_io(struct folio *folio)
{
struct meta_anchor *anchor = folio->private;
atomic_inc(&anchor->io_count);
}
static inline void dec_io(struct folio *folio, blk_status_t status,
void (*handler)(struct folio *, blk_status_t))
{
struct meta_anchor *anchor = folio->private;
if (anchor->status == BLK_STS_OK)
anchor->status = status;
if (atomic_dec_and_test(&anchor->io_count))
handler(folio, anchor->status);
}
#ifdef CONFIG_MIGRATION
static int __metapage_migrate_folio(struct address_space *mapping,
struct folio *dst, struct folio *src,
enum migrate_mode mode)
{
struct meta_anchor *src_anchor = src->private;
struct metapage *mps[MPS_PER_PAGE] = {0};
struct metapage *mp;
int i, rc;
for (i = 0; i < MPS_PER_PAGE; i++) {
mp = src_anchor->mp[i];
if (mp && metapage_locked(mp))
return -EAGAIN;
}
rc = filemap_migrate_folio(mapping, dst, src, mode);
if (rc)
return rc;
for (i = 0; i < MPS_PER_PAGE; i++) {
mp = src_anchor->mp[i];
if (!mp)
continue;
if (unlikely(insert_metapage(dst, mp))) {
/* If error, roll-back previously inserted pages */
for (int j = 0 ; j < i; j++) {
if (mps[j])
remove_metapage(dst, mps[j]);
}
return -EAGAIN;
}
mps[i] = mp;
}
/* Update the metapage and remove it from src */
for (i = 0; i < MPS_PER_PAGE; i++) {
mp = mps[i];
if (mp) {
int page_offset = mp->data - folio_address(src);
mp->data = folio_address(dst) + page_offset;
mp->folio = dst;
remove_metapage(src, mp);
}
}
return 0;
}
#endif /* CONFIG_MIGRATION */
#else
static inline struct metapage *folio_to_mp(struct folio *folio, int offset)
{
return folio->private;
}
static inline int insert_metapage(struct folio *folio, struct metapage *mp)
{
if (mp) {
folio_attach_private(folio, mp);
kmap(&folio->page);
}
return 0;
}
static inline void remove_metapage(struct folio *folio, struct metapage *mp)
{
folio_detach_private(folio);
kunmap(&folio->page);
}
#define inc_io(folio) do {} while(0)
#define dec_io(folio, status, handler) handler(folio, status)
#ifdef CONFIG_MIGRATION
static int __metapage_migrate_folio(struct address_space *mapping,
struct folio *dst, struct folio *src,
enum migrate_mode mode)
{
struct metapage *mp;
int page_offset;
int rc;
mp = folio_to_mp(src, 0);
if (metapage_locked(mp))
return -EAGAIN;
rc = filemap_migrate_folio(mapping, dst, src, mode);
if (rc)
return rc;
if (unlikely(insert_metapage(dst, mp)))
return -EAGAIN;
page_offset = mp->data - folio_address(src);
mp->data = folio_address(dst) + page_offset;
mp->folio = dst;
remove_metapage(src, mp);
return 0;
}
#endif /* CONFIG_MIGRATION */
#endif
static inline struct metapage *alloc_metapage(gfp_t gfp_mask)
{
struct metapage *mp = mempool_alloc(metapage_mempool, gfp_mask);
if (mp) {
mp->lid = 0;
mp->lsn = 0;
mp->data = NULL;
mp->clsn = 0;
mp->log = NULL;
init_waitqueue_head(&mp->wait);
}
return mp;
}
static inline void free_metapage(struct metapage *mp)
{
mempool_free(mp, metapage_mempool);
}
int __init metapage_init(void)
{
/*
* Allocate the metapage structures
*/
metapage_cache = kmem_cache_create("jfs_mp", sizeof(struct metapage),
0, 0, NULL);
if (metapage_cache == NULL)
return -ENOMEM;
metapage_mempool = mempool_create_slab_pool(METAPOOL_MIN_PAGES,
metapage_cache);
if (metapage_mempool == NULL) {
kmem_cache_destroy(metapage_cache);
return -ENOMEM;
}
return 0;
}
void metapage_exit(void)
{
mempool_destroy(metapage_mempool);
kmem_cache_destroy(metapage_cache);
}
static inline void drop_metapage(struct folio *folio, struct metapage *mp)
{
if (mp->count || mp->nohomeok || test_bit(META_dirty, &mp->flag) ||
test_bit(META_io, &mp->flag))
return;
remove_metapage(folio, mp);
INCREMENT(mpStat.pagefree);
free_metapage(mp);
}
/*
* Metapage address space operations
*/
static sector_t metapage_get_blocks(struct inode *inode, sector_t lblock,
int *len)
{
int rc = 0;
int xflag;
s64 xaddr;
sector_t file_blocks = (inode->i_size + inode->i_sb->s_blocksize - 1) >>
inode->i_blkbits;
if (lblock >= file_blocks)
return 0;
if (lblock + *len > file_blocks)
*len = file_blocks - lblock;
if (inode->i_ino) {
rc = xtLookup(inode, (s64)lblock, *len, &xflag, &xaddr, len, 0);
if ((rc == 0) && *len)
lblock = (sector_t)xaddr;
else
lblock = 0;
} /* else no mapping */
return lblock;
}
static void last_read_complete(struct folio *folio, blk_status_t status)
{
if (status)
printk(KERN_ERR "Read error %d at %#llx\n", status,
folio_pos(folio));
folio_end_read(folio, status == 0);
}
static void metapage_read_end_io(struct bio *bio)
{
struct folio *folio = bio->bi_private;
dec_io(folio, bio->bi_status, last_read_complete);
bio_put(bio);
}
static void remove_from_logsync(struct metapage *mp)
{
struct jfs_log *log = mp->log;
unsigned long flags;
/*
* This can race. Recheck that log hasn't been set to null, and after
* acquiring logsync lock, recheck lsn
*/
if (!log)
return;
LOGSYNC_LOCK(log, flags);
if (mp->lsn) {
mp->log = NULL;
mp->lsn = 0;
mp->clsn = 0;
log->count--;
list_del(&mp->synclist);
}
LOGSYNC_UNLOCK(log, flags);
}
static void last_write_complete(struct folio *folio, blk_status_t status)
{
struct metapage *mp;
unsigned int offset;
if (status) {
int err = blk_status_to_errno(status);
printk(KERN_ERR "metapage_write_end_io: I/O error\n");
mapping_set_error(folio->mapping, err);
}
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (mp && test_bit(META_io, &mp->flag)) {
if (mp->lsn)
remove_from_logsync(mp);
clear_bit(META_io, &mp->flag);
}
/*
* I'd like to call drop_metapage here, but I don't think it's
* safe unless I have the page locked
*/
}
folio_end_writeback(folio);
}
static void metapage_write_end_io(struct bio *bio)
{
struct folio *folio = bio->bi_private;
BUG_ON(!folio->private);
dec_io(folio, bio->bi_status, last_write_complete);
bio_put(bio);
}
static int metapage_write_folio(struct folio *folio,
struct writeback_control *wbc)
{
struct bio *bio = NULL;
int block_offset; /* block offset of mp within page */
struct inode *inode = folio->mapping->host;
int blocks_per_mp = JFS_SBI(inode->i_sb)->nbperpage;
int len;
int xlen;
struct metapage *mp;
int redirty = 0;
sector_t lblock;
int nr_underway = 0;
sector_t pblock;
sector_t next_block = 0;
sector_t page_start;
unsigned long bio_bytes = 0;
unsigned long bio_offset = 0;
int offset;
int bad_blocks = 0;
page_start = folio_pos(folio) >> inode->i_blkbits;
BUG_ON(!folio_test_locked(folio));
BUG_ON(folio_test_writeback(folio));
folio_start_writeback(folio);
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (!mp || !test_bit(META_dirty, &mp->flag))
continue;
if (mp->nohomeok && !test_bit(META_forcewrite, &mp->flag)) {
redirty = 1;
/*
* Make sure this page isn't blocked indefinitely.
* If the journal isn't undergoing I/O, push it
*/
if (mp->log && !(mp->log->cflag & logGC_PAGEOUT))
jfs_flush_journal(mp->log, 0);
continue;
}
clear_bit(META_dirty, &mp->flag);
set_bit(META_io, &mp->flag);
block_offset = offset >> inode->i_blkbits;
lblock = page_start + block_offset;
if (bio) {
if (xlen && lblock == next_block) {
/* Contiguous, in memory & on disk */
len = min(xlen, blocks_per_mp);
xlen -= len;
bio_bytes += len << inode->i_blkbits;
continue;
}
/* Not contiguous */
bio_add_folio_nofail(bio, folio, bio_bytes, bio_offset);
/*
* Increment counter before submitting i/o to keep
* count from hitting zero before we're through
*/
inc_io(folio);
if (!bio->bi_iter.bi_size)
goto dump_bio;
submit_bio(bio);
nr_underway++;
bio = NULL;
} else
inc_io(folio);
xlen = (folio_size(folio) - offset) >> inode->i_blkbits;
pblock = metapage_get_blocks(inode, lblock, &xlen);
if (!pblock) {
printk(KERN_ERR "JFS: metapage_get_blocks failed\n");
/*
* We already called inc_io(), but can't cancel it
* with dec_io() until we're done with the page
*/
bad_blocks++;
continue;
}
len = min(xlen, (int)JFS_SBI(inode->i_sb)->nbperpage);
bio = bio_alloc(inode->i_sb->s_bdev, 1, REQ_OP_WRITE, GFP_NOFS);
bio->bi_iter.bi_sector = pblock << (inode->i_blkbits - 9);
bio->bi_end_io = metapage_write_end_io;
bio->bi_private = folio;
/* Don't call bio_add_page yet, we may add to this vec */
bio_offset = offset;
bio_bytes = len << inode->i_blkbits;
xlen -= len;
next_block = lblock + len;
}
if (bio) {
bio_add_folio_nofail(bio, folio, bio_bytes, bio_offset);
if (!bio->bi_iter.bi_size)
goto dump_bio;
submit_bio(bio);
nr_underway++;
}
if (redirty)
folio_redirty_for_writepage(wbc, folio);
folio_unlock(folio);
if (bad_blocks)
goto err_out;
if (nr_underway == 0)
folio_end_writeback(folio);
return 0;
dump_bio:
print_hex_dump(KERN_ERR, "JFS: dump of bio: ", DUMP_PREFIX_ADDRESS, 16,
4, bio, sizeof(*bio), 0);
bio_put(bio);
folio_unlock(folio);
dec_io(folio, BLK_STS_OK, last_write_complete);
err_out:
while (bad_blocks--)
dec_io(folio, BLK_STS_OK, last_write_complete);
return -EIO;
}
static int metapage_writepages(struct address_space *mapping,
struct writeback_control *wbc)
{
struct blk_plug plug;
struct folio *folio = NULL;
int err;
blk_start_plug(&plug);
while ((folio = writeback_iter(mapping, wbc, folio, &err)))
err = metapage_write_folio(folio, wbc);
blk_finish_plug(&plug);
return err;
}
static int metapage_read_folio(struct file *fp, struct folio *folio)
{
struct inode *inode = folio->mapping->host;
struct bio *bio = NULL;
int block_offset;
int blocks_per_page = i_blocks_per_folio(inode, folio);
sector_t page_start; /* address of page in fs blocks */
sector_t pblock;
int xlen;
unsigned int len;
int offset;
BUG_ON(!folio_test_locked(folio));
page_start = folio_pos(folio) >> inode->i_blkbits;
block_offset = 0;
while (block_offset < blocks_per_page) {
xlen = blocks_per_page - block_offset;
pblock = metapage_get_blocks(inode, page_start + block_offset,
&xlen);
if (pblock) {
if (!folio->private)
insert_metapage(folio, NULL);
inc_io(folio);
if (bio)
submit_bio(bio);
bio = bio_alloc(inode->i_sb->s_bdev, 1, REQ_OP_READ,
GFP_NOFS);
bio->bi_iter.bi_sector =
pblock << (inode->i_blkbits - 9);
bio->bi_end_io = metapage_read_end_io;
bio->bi_private = folio;
len = xlen << inode->i_blkbits;
offset = block_offset << inode->i_blkbits;
bio_add_folio_nofail(bio, folio, len, offset);
block_offset += xlen;
} else
block_offset++;
}
if (bio)
submit_bio(bio);
else
folio_unlock(folio);
return 0;
}
static bool metapage_release_folio(struct folio *folio, gfp_t gfp_mask)
{
struct metapage *mp;
bool ret = true;
int offset;
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (!mp)
continue;
jfs_info("metapage_release_folio: mp = 0x%p", mp);
if (mp->count || mp->nohomeok ||
test_bit(META_dirty, &mp->flag)) {
jfs_info("count = %ld, nohomeok = %d", mp->count,
mp->nohomeok);
ret = false;
continue;
}
if (mp->lsn)
remove_from_logsync(mp);
remove_metapage(folio, mp);
INCREMENT(mpStat.pagefree);
free_metapage(mp);
}
return ret;
}
#ifdef CONFIG_MIGRATION
/*
* metapage_migrate_folio - Migration function for JFS metapages
*/
static int metapage_migrate_folio(struct address_space *mapping,
struct folio *dst, struct folio *src,
enum migrate_mode mode)
{
int expected_count;
if (!src->private)
return filemap_migrate_folio(mapping, dst, src, mode);
/* Check whether page does not have extra refs before we do more work */
expected_count = folio_expected_ref_count(src) + 1;
if (folio_ref_count(src) != expected_count)
return -EAGAIN;
return __metapage_migrate_folio(mapping, dst, src, mode);
}
#else
#define metapage_migrate_folio NULL
#endif /* CONFIG_MIGRATION */
static void metapage_invalidate_folio(struct folio *folio, size_t offset,
size_t length)
{
BUG_ON(offset || length < folio_size(folio));
BUG_ON(folio_test_writeback(folio));
metapage_release_folio(folio, 0);
}
const struct address_space_operations jfs_metapage_aops = {
.read_folio = metapage_read_folio,
.writepages = metapage_writepages,
.release_folio = metapage_release_folio,
.invalidate_folio = metapage_invalidate_folio,
.dirty_folio = filemap_dirty_folio,
.migrate_folio = metapage_migrate_folio,
};
struct metapage *__get_metapage(struct inode *inode, unsigned long lblock,
unsigned int size, int absolute,
unsigned long new)
{
int l2BlocksPerPage;
int l2bsize;
struct address_space *mapping;
struct metapage *mp = NULL;
struct folio *folio;
unsigned long page_index;
unsigned long page_offset;
jfs_info("__get_metapage: ino = %ld, lblock = 0x%lx, abs=%d",
inode->i_ino, lblock, absolute);
l2bsize = inode->i_blkbits;
l2BlocksPerPage = PAGE_SHIFT - l2bsize;
page_index = lblock >> l2BlocksPerPage;
page_offset = (lblock - (page_index << l2BlocksPerPage)) << l2bsize;
if ((page_offset + size) > PAGE_SIZE) {
jfs_err("MetaData crosses page boundary!!");
jfs_err("lblock = %lx, size = %d", lblock, size);
dump_stack();
return NULL;
}
if (absolute)
mapping = JFS_SBI(inode->i_sb)->direct_inode->i_mapping;
else {
/*
* If an nfs client tries to read an inode that is larger
* than any existing inodes, we may try to read past the
* end of the inode map
*/
if ((lblock << inode->i_blkbits) >= inode->i_size)
return NULL;
mapping = inode->i_mapping;
}
if (new && (PSIZE == PAGE_SIZE)) {
folio = filemap_grab_folio(mapping, page_index);
if (IS_ERR(folio)) {
jfs_err("filemap_grab_folio failed!");
return NULL;
}
folio_mark_uptodate(folio);
} else {
folio = read_mapping_folio(mapping, page_index, NULL);
if (IS_ERR(folio)) {
jfs_err("read_mapping_page failed!");
return NULL;
}
folio_lock(folio);
}
mp = folio_to_mp(folio, page_offset);
if (mp) {
if (mp->logical_size != size) {
jfs_error(inode->i_sb,
"get_mp->logical_size != size\n");
jfs_err("logical_size = %d, size = %d",
mp->logical_size, size);
dump_stack();
goto unlock;
}
mp->count++;
lock_metapage(mp);
if (test_bit(META_discard, &mp->flag)) {
if (!new) {
jfs_error(inode->i_sb,
"using a discarded metapage\n");
discard_metapage(mp);
goto unlock;
}
clear_bit(META_discard, &mp->flag);
}
} else {
INCREMENT(mpStat.pagealloc);
mp = alloc_metapage(GFP_NOFS);
if (!mp)
goto unlock;
mp->folio = folio;
mp->sb = inode->i_sb;
mp->flag = 0;
mp->xflag = COMMIT_PAGE;
mp->count = 1;
mp->nohomeok = 0;
mp->logical_size = size;
mp->data = folio_address(folio) + page_offset;
mp->index = lblock;
if (unlikely(insert_metapage(folio, mp))) {
free_metapage(mp);
goto unlock;
}
lock_metapage(mp);
}
if (new) {
jfs_info("zeroing mp = 0x%p", mp);
memset(mp->data, 0, PSIZE);
}
folio_unlock(folio);
jfs_info("__get_metapage: returning = 0x%p data = 0x%p", mp, mp->data);
return mp;
unlock:
folio_unlock(folio);
return NULL;
}
void grab_metapage(struct metapage * mp)
{
jfs_info("grab_metapage: mp = 0x%p", mp);
folio_get(mp->folio);
folio_lock(mp->folio);
mp->count++;
lock_metapage(mp);
folio_unlock(mp->folio);
}
static int metapage_write_one(struct folio *folio)
{
struct address_space *mapping = folio->mapping;
struct writeback_control wbc = {
.sync_mode = WB_SYNC_ALL,
.nr_to_write = folio_nr_pages(folio),
};
int ret = 0;
BUG_ON(!folio_test_locked(folio));
folio_wait_writeback(folio);
if (folio_clear_dirty_for_io(folio)) {
folio_get(folio);
ret = metapage_write_folio(folio, &wbc);
if (ret == 0)
folio_wait_writeback(folio);
folio_put(folio);
} else {
folio_unlock(folio);
}
if (!ret)
ret = filemap_check_errors(mapping);
return ret;
}
void force_metapage(struct metapage *mp)
{
struct folio *folio = mp->folio;
jfs_info("force_metapage: mp = 0x%p", mp);
set_bit(META_forcewrite, &mp->flag);
clear_bit(META_sync, &mp->flag);
folio_get(folio);
folio_lock(folio);
folio_mark_dirty(folio);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
clear_bit(META_forcewrite, &mp->flag);
folio_put(folio);
}
void hold_metapage(struct metapage *mp)
{
folio_lock(mp->folio);
}
void put_metapage(struct metapage *mp)
{
if (mp->count || mp->nohomeok) {
/* Someone else will release this */
folio_unlock(mp->folio);
return;
}
folio_get(mp->folio);
mp->count++;
lock_metapage(mp);
folio_unlock(mp->folio);
release_metapage(mp);
}
void release_metapage(struct metapage * mp)
{
struct folio *folio = mp->folio;
jfs_info("release_metapage: mp = 0x%p, flag = 0x%lx", mp, mp->flag);
folio_lock(folio);
unlock_metapage(mp);
assert(mp->count);
if (--mp->count || mp->nohomeok) {
folio_unlock(folio);
folio_put(folio);
return;
}
if (test_bit(META_dirty, &mp->flag)) {
folio_mark_dirty(folio);
if (test_bit(META_sync, &mp->flag)) {
clear_bit(META_sync, &mp->flag);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
folio_lock(folio);
}
} else if (mp->lsn) /* discard_metapage doesn't remove it */
remove_from_logsync(mp);
/* Try to keep metapages from using up too much memory */
drop_metapage(folio, mp);
folio_unlock(folio);
folio_put(folio);
}
void __invalidate_metapages(struct inode *ip, s64 addr, int len)
{
sector_t lblock;
int l2BlocksPerPage = PAGE_SHIFT - ip->i_blkbits;
int BlocksPerPage = 1 << l2BlocksPerPage;
/* All callers are interested in block device's mapping */
struct address_space *mapping =
JFS_SBI(ip->i_sb)->direct_inode->i_mapping;
struct metapage *mp;
unsigned int offset;
/*
* Mark metapages to discard. They will eventually be
* released, but should not be written.
*/
for (lblock = addr & ~(BlocksPerPage - 1); lblock < addr + len;
lblock += BlocksPerPage) {
struct folio *folio = filemap_lock_folio(mapping,
lblock >> l2BlocksPerPage);
if (IS_ERR(folio))
continue;
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (!mp)
continue;
if (mp->index < addr)
continue;
if (mp->index >= addr + len)
break;
clear_bit(META_dirty, &mp->flag);
set_bit(META_discard, &mp->flag);
if (mp->lsn)
remove_from_logsync(mp);
}
folio_unlock(folio);
folio_put(folio);
}
}
#ifdef CONFIG_JFS_STATISTICS
int jfs_mpstat_proc_show(struct seq_file *m, void *v)
{
seq_printf(m,
"JFS Metapage statistics\n"
"=======================\n"
"page allocations = %d\n"
"page frees = %d\n"
"lock waits = %d\n",
mpStat.pagealloc,
mpStat.pagefree,
mpStat.lockwait);
return 0;
}
#endif
] |
| 342/6 | 2026/03/16 16:32 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 11312 output: 23 thoughts: 1595 Thoughts: **Analysis of `folio->mapping` Dereferences in `fs/jfs/jfs_metapage.c`** Okay, I've gone through the file `fs/jfs/jfs_metapage.c` and found a number of instances where `folio->mapping` is accessed. I've broken them down as requested. 1. **`insert_metapage` and `remove_metapage`**: In both functions, the code dereferences `folio->mapping` to access `host->i_blkbits`. The critical point is that no null check exists *before* this dereference. This could lead to a crash if the folio's mapping is uninitialized, particularly during certain error scenarios or truncation. 2. **`last_write_complete`**: Here, `folio->mapping` is passed directly to `mapping_set_error`. Once again, there's no null check. Now, given that this function is an I/O completion handler, I'm thinking that if the folio is removed from the page cache *while I/O is still in progress*, `folio->mapping` could very well be NULL at this point. This seems like a likely candidate for a bug. 3. **`metapage_write_folio` and `metapage_read_folio`**: In these functions, the code obtains the inode from `folio->mapping->host`. It's part of the standard `address_space_operations`, so ordinarily I wouldn't expect `folio->mapping` to be NULL. However, I need to keep in mind the potential for truncation and the possibility that this might be used in a callback that could see the folio in a transient state. No NULL check exists here either. 4. **`metapage_write_one`**: The code assigns `folio->mapping` to `mapping`, and then the variable `mapping` is passed to `filemap_check_errors`. Again, no null check prior to `filemap_check_errors` calling on `mapping`. Now, the important question is whether `folio->mapping` *can* be NULL in any of these contexts. 
I have a hunch that, while generally the page cache guarantees a valid mapping, situations during truncation, eviction, or I/O completion after removal from the page cache *could* result in a NULL pointer. Specifically, `last_write_complete` seems like the highest-risk area. I should probably go back and search the commit history to see if there have been any relevant fixes or known issues involving this. The fact that the question asks if there's *another* place where it's dereferenced without a null check suggests there may already be a known instance. |
| 343/6 | 2026/03/16 16:32 | tool | codesearch-dir-index |
Args:map[Dir:fs/jfs/] Results: map[Files:[acl.c file.c inode.c ioctl.c jfs_acl.h jfs_btree.h jfs_debug.c jfs_debug.h jfs_dinode.h jfs_discard.c jfs_discard.h jfs_dmap.c jfs_dmap.h jfs_dtree.c jfs_dtree.h jfs_extent.c jfs_extent.h jfs_filsys.h jfs_imap.c jfs_imap.h jfs_incore.h jfs_inode.c jfs_inode.h jfs_lock.h jfs_logmgr.c jfs_logmgr.h jfs_metapage.c jfs_metapage.h jfs_mount.c jfs_superblock.h jfs_txnmgr.c jfs_txnmgr.h jfs_types.h jfs_umount.c jfs_unicode.c jfs_unicode.h jfs_xattr.h jfs_xtree.c jfs_xtree.h namei.c resize.c super.c symlink.c xattr.c] Subdirs:<nil>] |
| 344/6 | 2026/03/16 16:32 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 13926 output: 41 thoughts: 993 Thoughts: **Analyzing Folio Mapping and Potential Null Dereference** Okay, so I'm trying to figure out if there's a potential null pointer dereference related to `folio->mapping`, but I can't directly use `git log` or `grep` to quickly search. I'm going to have to manually inspect the code. First, I ruled out a direct `git log` approach, so I need to methodically check for usages of `folio->mapping`. I initially missed a few instances, so I'm going back to basics. I had identified the usage of `folio->mapping` in `i_blocks_per_folio` macro, but now I think to examine the places where `insert_metapage` and `remove_metapage` are called. I'm seeing that these functions, in turn, use `folio->mapping` to determine the calculation of `l2mp_blocks`. I see that `insert_metapage` takes it from: ```c l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits; ``` And `remove_metapage` takes it from: ```c int l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits; ``` I've also noted the call sites of `insert_metapage` and `remove_metapage`, which are called from quite a few places including `__metapage_migrate_folio`, `__get_metapage`, `metapage_read_folio`, `drop_metapage` and `metapage_release_folio`. Critically, in `__metapage_migrate_folio`, the `mapping` is passed as an argument, so this should be safe. However, in that code, `filemap_migrate_folio` uses the `mapping` argument and `insert_metapage` and `remove_metapage` uses `folio->mapping`. For `dst`, it appears it should be set by the migration code. This made me realize I need to understand if `folio->mapping` can ever be NULL. I know that `metapage_release_folio` is called with the `.release_folio` operation, which might be called during folio eviction. I'm recalling that `release_folio` is generally called when the kernel wants to free the folio's private data before freeing the folio itself. 
If the folio is in the page cache, then it should have a valid mapping. However, I recall a recent JFS fix, so let's revisit `metapage_write_folio`, which also uses `folio->mapping`. Here: ```c struct inode *inode = folio->mapping->host; ``` If `folio->mapping` is NULL, this will cause a crash. Now I need to systematically search for *all* places where `folio->mapping` is used, since there is no `grep` or `git` to help, I will have to use manual code scanning or potentially something like `codesearch-find-references` (if possible). I can try to find references to `struct folio::mapping`. |
| 345/6 | 2026/03/16 16:32 | tool | codesearch-find-references |
Args:map[Name:folio::mapping SourceTreePrefix:fs/jfs/jfs_metapage.c] Results: map[References:<nil> TruncatedOutput:false] |
| 346/6 | 2026/03/16 16:32 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 15664 output: 28 thoughts: 690 Thoughts: **Debugging a Potential NULL Dereference**
Okay, that initial approach didn't pan out; I need to change gears. Let's start with a focused string search for "mapping" within the file. Ah, right, I've already scanned the file. My mistake.
I need to focus on potential `folio->mapping` dereferences and see if they can be NULL. I'm looking at `metapage_read_folio` and immediately see it:
```c
static int metapage_read_folio(struct file *fp, struct folio *folio)
{
struct inode *inode = folio->mapping->host;
```
Interesting, it dereferences `folio->mapping` there.
Now, wait, here's another spot: `last_write_complete`:
```c
if (status) {
int err = blk_status_to_errno(status);
printk(KERN_ERR "metapage_write_end_io: I/O error\n");
mapping_set_error(folio->mapping, err);
}
```
If `status` indicates an error, `mapping_set_error(folio->mapping, err)` is called. And, `mapping_set_error` itself is:
```c
static inline void mapping_set_error(struct address_space *mapping, int error)
{
if (unlikely(error)) {
if (error == -ENOSPC)
set_bit(AS_ENOSPC, &mapping->flags);
else
set_bit(AS_EIO, &mapping->flags);
}
}
```
So it also dereferences `mapping`. Can `folio->mapping` *actually* be NULL here? Let's think through this. `last_write_complete` is called from `metapage_write_end_io` which in turn is a `bio->bi_end_io` handler. The folio was under writeback. Folios undergoing writeback should have their mapping, usually, until the writeback is complete.
But wait, I recall seeing something similar in another filesystem or an older version of JFS... yes, here it is in `insert_metapage`.
```c
#if MPS_PER_PAGE > 1
// ...
static inline int insert_metapage(struct folio *folio, struct metapage *mp)
{
// ...
if (mp) {
l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
```
Ah-ha! If `MPS_PER_PAGE == 1`, the definition of `insert_metapage` changes:
```c
#else
// ...
static inline int insert_metapage(struct folio *folio, struct metapage *mp)
{
if (mp) {
folio_attach_private(folio, mp);
kmap(&folio->page);
}
return 0;
}
```
Here, it *doesn't* use `folio->mapping`. Therefore, the potential NULL dereference is *only* when `MPS_PER_PAGE > 1`. Now, what is `MPS_PER_PAGE`? That is `PAGE_SIZE >> L2PSIZE`. Time to look at `jfs_metapage.h` to check `L2PSIZE`. I need to figure out under what conditions `MPS_PER_PAGE > 1`.
|
| 347/6 | 2026/03/16 16:32 | tool | read-file |
Args:map[File:fs/jfs/jfs_metapage.h] Results: map[Contents:/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Copyright (C) International Business Machines Corp., 2000-2002
* Portions Copyright (C) Christoph Hellwig, 2001-2002
*/
#ifndef _H_JFS_METAPAGE
#define _H_JFS_METAPAGE
#include <linux/pagemap.h>
struct metapage {
/* Common logsyncblk prefix (see jfs_logmgr.h) */
u16 xflag;
u16 unused;
lid_t lid;
int lsn;
struct list_head synclist;
/* End of logsyncblk prefix */
unsigned long flag; /* See Below */
unsigned long count; /* Reference count */
void *data; /* Data pointer */
sector_t index; /* block address of page */
wait_queue_head_t wait;
/* implementation */
struct folio *folio;
struct super_block *sb;
unsigned int logical_size;
/* Journal management */
int clsn;
int nohomeok;
struct jfs_log *log;
};
/* metapage flag */
#define META_locked 0
#define META_dirty 2
#define META_sync 3
#define META_discard 4
#define META_forcewrite 5
#define META_io 6
#define mark_metapage_dirty(mp) set_bit(META_dirty, &(mp)->flag)
/* function prototypes */
extern int metapage_init(void);
extern void metapage_exit(void);
extern struct metapage *__get_metapage(struct inode *inode,
unsigned long lblock, unsigned int size,
int absolute, unsigned long new);
#define read_metapage(inode, lblock, size, absolute)\
__get_metapage(inode, lblock, size, absolute, false)
#define get_metapage(inode, lblock, size, absolute)\
__get_metapage(inode, lblock, size, absolute, true)
extern void release_metapage(struct metapage *);
extern void grab_metapage(struct metapage *);
extern void force_metapage(struct metapage *);
/*
* hold_metapage and put_metapage are used in conjunction. The page lock
* is not dropped between the two, so no other threads can get or release
* the metapage
*/
extern void hold_metapage(struct metapage *);
extern void put_metapage(struct metapage *);
static inline void write_metapage(struct metapage *mp)
{
set_bit(META_dirty, &mp->flag);
release_metapage(mp);
}
static inline void flush_metapage(struct metapage *mp)
{
set_bit(META_sync, &mp->flag);
write_metapage(mp);
}
static inline void discard_metapage(struct metapage *mp)
{
clear_bit(META_dirty, &mp->flag);
set_bit(META_discard, &mp->flag);
release_metapage(mp);
}
static inline void metapage_nohomeok(struct metapage *mp)
{
struct folio *folio = mp->folio;
folio_lock(folio);
if (!mp->nohomeok++) {
mark_metapage_dirty(mp);
folio_get(folio);
folio_wait_writeback(folio);
}
folio_unlock(folio);
}
/*
* This serializes access to mp->lsn when metapages are added to logsynclist
* without setting nohomeok. i.e. updating imap & dmap
*/
static inline void metapage_wait_for_io(struct metapage *mp)
{
if (test_bit(META_io, &mp->flag))
folio_wait_writeback(mp->folio);
}
/*
* This is called when already holding the metapage
*/
static inline void _metapage_homeok(struct metapage *mp)
{
if (!--mp->nohomeok)
folio_put(mp->folio);
}
static inline void metapage_homeok(struct metapage *mp)
{
hold_metapage(mp);
_metapage_homeok(mp);
put_metapage(mp);
}
extern const struct address_space_operations jfs_metapage_aops;
/*
* This routines invalidate all pages for an extent.
*/
extern void __invalidate_metapages(struct inode *, s64, int);
#define invalidate_pxd_metapages(ip, pxd) \
__invalidate_metapages((ip), addressPXD(&(pxd)), lengthPXD(&(pxd)))
#define invalidate_dxd_metapages(ip, dxd) \
__invalidate_metapages((ip), addressDXD(&(dxd)), lengthDXD(&(dxd)))
#define invalidate_xad_metapages(ip, xad) \
__invalidate_metapages((ip), addressXAD(&(xad)), lengthXAD(&(xad)))
#endif /* _H_JFS_METAPAGE */
] |
| 348/6 | 2026/03/16 16:32 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 18442 output: 27 thoughts: 44 Thoughts: Okay, here's my take on those thoughts, tailored for an expert audience: **Investigating the L2PSIZE Macro** Hmm, I'm just looking through `jfs_metapage.h`, and I can't seem to find the definition for `L2PSIZE`. That's a bit odd. I was expecting it to be declared here, given the context. Let me make a mental note: it's not present where I anticipated. Alright, since it's not in the metapage header, it's reasonable to assume it's located in the broader file system definitions. My next instinct is to check `jfs_filsys.h`. That's where I'd expect to find global constants and size definitions related to the JFS file system as a whole. Time to dive into that one and see if it's defined there. |
| 349/6 | 2026/03/16 16:33 | tool | read-file |
Args:map[File:fs/jfs/jfs_filsys.h] Results: map[Contents:/* SPDX-License-Identifier: GPL-2.0-or-later */ /* * Copyright (C) International Business Machines Corp., 2000-2003 */ #ifndef _H_JFS_FILSYS #define _H_JFS_FILSYS /* * jfs_filsys.h * * file system (implementation-dependent) constants * * refer to <limits.h> for system wide implementation-dependent constants */ /* * file system option (superblock flag) */ /* directory option */ #define JFS_UNICODE 0x00000001 /* unicode name */ /* mount time flags for error handling */ #define JFS_ERR_REMOUNT_RO 0x00000002 /* remount read-only */ #define JFS_ERR_CONTINUE 0x00000004 /* continue */ #define JFS_ERR_PANIC 0x00000008 /* panic */ #define JFS_ERR_MASK (JFS_ERR_REMOUNT_RO|JFS_ERR_CONTINUE|JFS_ERR_PANIC) /* Quota support */ #define JFS_USRQUOTA 0x00000010 #define JFS_GRPQUOTA 0x00000020 /* mount time flag to disable journaling to disk */ #define JFS_NOINTEGRITY 0x00000040 /* mount time flag to enable TRIM to ssd disks */ #define JFS_DISCARD 0x00000080 /* commit option */ #define JFS_COMMIT 0x00000f00 /* commit option mask */ #define JFS_GROUPCOMMIT 0x00000100 /* group (of 1) commit */ #define JFS_LAZYCOMMIT 0x00000200 /* lazy commit */ #define JFS_TMPFS 0x00000400 /* temporary file system - * do not log/commit: * Never implemented */ /* log logical volume option */ #define JFS_INLINELOG 0x00000800 /* inline log within file system */ #define JFS_INLINEMOVE 0x00001000 /* inline log being moved */ /* Secondary aggregate inode table */ #define JFS_BAD_SAIT 0x00010000 /* current secondary ait is bad */ /* sparse regular file support */ #define JFS_SPARSE 0x00020000 /* sparse regular file */ /* DASD Limits F226941 */ #define JFS_DASD_ENABLED 0x00040000 /* DASD limits enabled */ #define JFS_DASD_PRIME 0x00080000 /* Prime DASD usage on boot */ /* big endian flag */ #define JFS_SWAP_BYTES 0x00100000 /* running on big endian computer */ /* Directory index */ #define JFS_DIR_INDEX 0x00200000 /* Persistent index for */ /* 
platform options */ #define JFS_LINUX 0x10000000 /* Linux support */ #define JFS_DFS 0x20000000 /* DCE DFS LFS support */ /* Never implemented */ #define JFS_OS2 0x40000000 /* OS/2 support */ /* case-insensitive name/directory support */ #define JFS_AIX 0x80000000 /* AIX support */ /* * buffer cache configuration */ /* page size */ #ifdef PSIZE #undef PSIZE #endif #define PSIZE 4096 /* page size (in byte) */ #define L2PSIZE 12 /* log2(PSIZE) */ #define POFFSET 4095 /* offset within page */ /* buffer page size */ #define BPSIZE PSIZE /* * fs fundamental size * * PSIZE >= file system block size >= PBSIZE >= DISIZE */ #define PBSIZE 512 /* physical block size (in byte) */ #define L2PBSIZE 9 /* log2(PBSIZE) */ #define DISIZE 512 /* on-disk inode size (in byte) */ #define L2DISIZE 9 /* log2(DISIZE) */ #define IDATASIZE 256 /* inode inline data size */ #define IXATTRSIZE 128 /* inode inline extended attribute size */ #define XTPAGE_SIZE 4096 #define log2_PAGESIZE 12 #define IAG_SIZE 4096 #define IAG_EXTENT_SIZE 4096 #define INOSPERIAG 4096 /* number of disk inodes per iag */ #define L2INOSPERIAG 12 /* l2 number of disk inodes per iag */ #define INOSPEREXT 32 /* number of disk inode per extent */ #define L2INOSPEREXT 5 /* l2 number of disk inode per extent */ #define IXSIZE (DISIZE * INOSPEREXT) /* inode extent size */ #define INOSPERPAGE 8 /* number of disk inodes per 4K page */ #define L2INOSPERPAGE 3 /* log2(INOSPERPAGE) */ #define IAGFREELIST_LWM 64 #define INODE_EXTENT_SIZE IXSIZE /* inode extent size */ #define NUM_INODE_PER_EXTENT INOSPEREXT #define NUM_INODE_PER_IAG INOSPERIAG #define MINBLOCKSIZE 512 #define L2MINBLOCKSIZE 9 #define MAXBLOCKSIZE 4096 #define L2MAXBLOCKSIZE 12 #define MAXFILESIZE ((s64)1 << 52) #define JFS_LINK_MAX 0xffffffff /* Minimum number of bytes supported for a JFS partition */ #define MINJFS (0x1000000) #define MINJFSTEXT "16" /* * file system block size -> physical block size */ #define LBOFFSET(x) ((x) & (PBSIZE - 1)) #define LBNUMBER(x) 
((x) >> L2PBSIZE) #define LBLK2PBLK(sb,b) ((b) << (sb->s_blocksize_bits - L2PBSIZE)) #define PBLK2LBLK(sb,b) ((b) >> (sb->s_blocksize_bits - L2PBSIZE)) /* size in byte -> last page number */ #define SIZE2PN(size) ( ((s64)((size) - 1)) >> (L2PSIZE) ) /* size in byte -> last file system block number */ #define SIZE2BN(size, l2bsize) ( ((s64)((size) - 1)) >> (l2bsize) ) /* * fixed physical block address (physical block size = 512 byte) * * NOTE: since we can't guarantee a physical block size of 512 bytes the use of * these macros should be removed and the byte offset macros used instead. */ #define SUPER1_B 64 /* primary superblock */ #define AIMAP_B (SUPER1_B + 8) /* 1st extent of aggregate inode map */ #define AITBL_B (AIMAP_B + 16) /* * 1st extent of aggregate inode table */ #define SUPER2_B (AITBL_B + 32) /* 2ndary superblock pbn */ #define BMAP_B (SUPER2_B + 8) /* block allocation map */ /* * SIZE_OF_SUPER defines the total amount of space reserved on disk for the * superblock. This is not the same as the superblock structure, since all of * this space is not currently being used. 
*/ #define SIZE_OF_SUPER PSIZE /* * SIZE_OF_AG_TABLE defines the amount of space reserved to hold the AG table */ #define SIZE_OF_AG_TABLE PSIZE /* * SIZE_OF_MAP_PAGE defines the amount of disk space reserved for each page of * the inode allocation map (to hold iag) */ #define SIZE_OF_MAP_PAGE PSIZE /* * fixed byte offset address */ #define SUPER1_OFF 0x8000 /* primary superblock */ #define AIMAP_OFF (SUPER1_OFF + SIZE_OF_SUPER) /* * Control page of aggregate inode map * followed by 1st extent of map */ #define AITBL_OFF (AIMAP_OFF + (SIZE_OF_MAP_PAGE << 1)) /* * 1st extent of aggregate inode table */ #define SUPER2_OFF (AITBL_OFF + INODE_EXTENT_SIZE) /* * secondary superblock */ #define BMAP_OFF (SUPER2_OFF + SIZE_OF_SUPER) /* * block allocation map */ /* * The following macro is used to indicate the number of reserved disk blocks at * the front of an aggregate, in terms of physical blocks. This value is * currently defined to be 32K. This turns out to be the same as the primary * superblock's address, since it directly follows the reserved blocks. */ #define AGGR_RSVD_BLOCKS SUPER1_B /* * The following macro is used to indicate the number of reserved bytes at the * front of an aggregate. This value is currently defined to be 32K. This * turns out to be the same as the primary superblock's byte offset, since it * directly follows the reserved blocks. */ #define AGGR_RSVD_BYTES SUPER1_OFF /* * The following macro defines the byte offset for the first inode extent in * the aggregate inode table. This allows us to find the self inode to find the * rest of the table. Currently this value is 44K. 
*/ #define AGGR_INODE_TABLE_START AITBL_OFF /* * fixed reserved inode number */ /* aggregate inode */ #define AGGR_RESERVED_I 0 /* aggregate inode (reserved) */ #define AGGREGATE_I 1 /* aggregate inode map inode */ #define BMAP_I 2 /* aggregate block allocation map inode */ #define LOG_I 3 /* aggregate inline log inode */ #define BADBLOCK_I 4 /* aggregate bad block inode */ #define FILESYSTEM_I 16 /* 1st/only fileset inode in ait: * fileset inode map inode */ /* per fileset inode */ #define FILESET_RSVD_I 0 /* fileset inode (reserved) */ #define FILESET_EXT_I 1 /* fileset inode extension */ #define ROOT_I 2 /* fileset root inode */ #define ACL_I 3 /* fileset ACL inode */ #define FILESET_OBJECT_I 4 /* the first fileset inode available for a file * or directory or link... */ #define FIRST_FILESET_INO 16 /* the first aggregate inode which describes * an inode. (To fsck this is also the first * inode in part 2 of the agg inode table.) */ /* * directory configuration */ #define JFS_NAME_MAX 255 #define JFS_PATH_MAX BPSIZE /* * file system state (superblock state) */ #define FM_CLEAN 0x00000000 /* file system is unmounted and clean */ #define FM_MOUNT 0x00000001 /* file system is mounted cleanly */ #define FM_DIRTY 0x00000002 /* file system was not unmounted and clean * when mounted or * commit failure occurred while being mounted: * fsck() must be run to repair */ #define FM_LOGREDO 0x00000004 /* log based recovery (logredo()) failed: * fsck() must be run to repair */ #define FM_EXTENDFS 0x00000008 /* file system extendfs() in progress */ #define FM_STATE_MAX 0x0000000f /* max value of s_state */ #endif /* _H_JFS_FILSYS */ ] |
| 350/6 | 2026/03/16 16:33 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 21847 output: 41 thoughts: 855 Thoughts: **Analyzing `jfs_metapage.c` for Potential NULL Pointer Dereferences** Okay, so I'm diving into this `jfs_metapage.c` code, specifically looking for instances where `folio->mapping` is dereferenced without a preceding null check. The goal is to identify potential vulnerabilities. The starting point is to understand the page size setup. `PSIZE` is 4096 and `L2PSIZE` is 12, so the base `PAGE_SIZE` is, as expected, 4096 bytes on many architectures like x86, implying `MPS_PER_PAGE` is 1. However, if `PAGE_SIZE` is larger (e.g., 16K or 64K), `MPS_PER_PAGE` becomes greater than 1, meaning the code under `#if MPS_PER_PAGE > 1` handles large page architectures. My initial scan identified a few suspicious locations: `insert_metapage` (line 104), `remove_metapage` (line 118), `last_write_complete` (line 330), `metapage_write_folio` (line 355), `metapage_read_folio` (line 466), and `metapage_write_one` (line 625). Let's zoom in on `metapage_write_one`: it *looks* okay, because while it dereferences `folio->mapping` when assigning it to the local variable `mapping`, it subsequently uses `mapping` in `filemap_check_errors(mapping)`. But if `mapping` *is* NULL (because `folio->mapping` is NULL), then the call `filemap_check_errors(NULL)` will crash the program. So that's one definite potential problem. Next, I glanced at `__get_metapage`. While it does use `mapping`, that's a local variable set from `inode->i_mapping`, so it's not a direct dereference of `folio->mapping`. Now, reviewing `metapage_write_folio`, I see it uses `folio->mapping` to access the `host` member: `struct inode *inode = folio->mapping->host;` This is a direct dereference. That's another potential issue. Okay, time for a more methodical scan, rechecking every instance of `->mapping`. Now, I'm doing a more careful search, and here's what I've found: * Line 104: `folio->mapping->host->i_blkbits` - Potential dereference. 
* Line 118: `folio->mapping->host->i_blkbits` - Potential dereference. * Line 330: `mapping_set_error(folio->mapping, err)` - Potential dereference. * Line 355: `folio->mapping->host` - Potential dereference. * Line 466: `folio->mapping->host` - Potential dereference. * Line 625: `folio->mapping` (assigned to `mapping`) - Potential dereference. However, in `metapage_migrate_folio`, while there is the use of `mapping` it's not `folio->mapping`. A crucial check, is if the macro `folio_mapping(folio)` expands to the use of `folio->mapping`. Now, I'll search for `folio_mapping`. |
| 351/6 | 2026/03/16 16:33 | tool | codesearch-find-references |
Args:map[Name:folio_mapping SourceTreePrefix:fs/jfs/jfs_metapage.c] Results: map[References:<nil> TruncatedOutput:false] |
| 352/6 | 2026/03/16 16:33 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 23535 output: 39 thoughts: 4076 Thoughts: **Analysis of `folio->mapping` Usage in `fs/jfs/jfs_metapage.c`** Alright, let's break this down. My primary concern is pinpointing where `folio->mapping` is dereferenced without a preceding null check, leading to potential crashes. I've thoroughly combed through the code, focusing on the instances where `folio->mapping` is accessed directly. My initial thought was to use `page->mapping`, but the code uses `folio`. I searched for `->mapping` throughout `jfs_metapage.c`. I've identified several locations where a crash is possible if `folio->mapping` is NULL. Primarily, these are: 1. **`insert_metapage`**: Specifically, within the `MPS_PER_PAGE > 1` block of code, the `folio->mapping->host->i_blkbits` is accessed without a NULL check on `folio->mapping` when `mp` is NOT NULL. 2. **`remove_metapage`**: Similar to `insert_metapage`, this occurs only if `MPS_PER_PAGE > 1`. 3. **`last_write_complete`**: `folio->mapping` is used in a call to `mapping_set_error()`. 4. **`metapage_write_folio`**: The inode is retrieved via `folio->mapping->host`. 5. **`metapage_read_folio`**: Similar to `metapage_write_folio`, the inode is retrieved via `folio->mapping->host`. 6. **`metapage_write_one`**: `folio->mapping` is assigned to a local variable `mapping` and then used in a call to `filemap_check_errors()`. I also checked other functions that use `folio` and `mapping`, such as: * `__metapage_migrate_folio` * `metapage_writepages` * `metapage_read_folio` * `force_metapage` * `release_metapage` * `__get_metapage` The code calls `insert_metapage` from `__get_metapage` and `metapage_read_folio`, and `metapage_read_folio` calls `insert_metapage` with a NULL `mp` which avoids the dereference of `folio->mapping`. 
Now, considering the context of how these folios are obtained (often from the page cache, using functions like `filemap_grab_folio`, or via `read_mapping_folio`), the scenarios where `folio->mapping` *should* be NULL are rare. Based on the code analysis, a direct dereference of `folio->mapping` is possible without a NULL check in several locations, especially if folios are somehow created or used in a manner inconsistent with the expected file system operation. I'd need further context to decide if it is a real vulnerability. Based on my experience I would suspect that one of these places, maybe `insert_metapage` or `remove_metapage`, has a bug. |
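The slot arithmetic at issue in `insert_metapage()`/`remove_metapage()` only needs the log2 of the filesystem block size, not the mapping itself. A minimal userspace sketch of that arithmetic follows; the constants and names are hypothetical stand-ins (not the jfs definitions), with a pretend 16 KiB page modeling an architecture where `MPS_PER_PAGE > 1`:

```c
/* Hypothetical userspace model of the jfs metapage slot math from
 * insert_metapage()/remove_metapage(). Constants mirror jfs: PSIZE is
 * the 4 KiB metapage size, and a 16 KiB page stands in for an
 * architecture where MPS_PER_PAGE > 1. */
#include <assert.h>

#define L2PSIZE          12                                /* log2(PSIZE), PSIZE = 4096 */
#define MODEL_PAGE_SHIFT 14                                /* pretend PAGE_SIZE = 16384 */
#define MPS_PER_PAGE     (1 << (MODEL_PAGE_SHIFT - L2PSIZE))   /* 4 slots per page */

/* Index arithmetic from insert_metapage(): which anchor slot the
 * metapage at block 'mp_index' occupies, given the fs block-size log2
 * (i_blkbits in the kernel code). */
static int mp_slot(unsigned long mp_index, int blkbits)
{
	int l2mp_blocks = L2PSIZE - blkbits;    /* log2 blocks per metapage */

	return (int)((mp_index >> l2mp_blocks) & (MPS_PER_PAGE - 1));
}
```

Since the commit message notes the same value is available as `mp->sb->s_blocksize_bits`, this computation does not inherently require going through `folio->mapping` at all.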
| 353/6 | 2026/03/16 16:33 | tool | codesearch-find-references |
Error:requested entity does not exist Args: map[Name:mapping SourceTreePrefix:fs/jfs/jfs_metapage.c] Results: map[References:<nil> TruncatedOutput:false] |
| 354/6 | 2026/03/16 16:33 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 28366 output: 645 thoughts: 62912 |
| 355/4 | 2026/03/16 16:40 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 73793 output: 176 thoughts: 387 Thoughts: **Fixing a Potential Crash in `last_write_complete`** Okay, so here's the situation with `last_write_complete`. This function, crucial as it is, is called after a write operation completes, specifically as the bio end_io callback from `metapage_write_end_io`. The goal here is to handle completion, clear flags, and generally tidy up after an I/O operation. But, there's a nasty little gotcha I need to address. The core of the problem is `folio->mapping`. If the folio gets truncated *during* the writeback process, `folio->mapping` can become NULL. When the writeback then finishes with an I/O error and `last_write_complete` is invoked, `mapping_set_error(folio->mapping, err)` is called, and boom, we crash. I can see the root cause here. This function isn't safe against folio truncation. I could lock the page to protect it, but the comment in the code notes that it's probably not the best approach. The fix is fairly straightforward: before attempting to use `folio->mapping`, I need to check if it's actually valid. If it's NULL, I'll need to handle the error in a different way or simply not set the error on the mapping. That check is going to have to be added to `last_write_complete`, stat! |
| 356/4 | 2026/03/16 16:40 | tool | codeeditor |
Args:map[CurrentCode: if (status) {
int err = blk_status_to_errno(status);
printk(KERN_ERR "metapage_write_end_io: I/O error\n");
mapping_set_error(folio->mapping, err);
} NewCode: if (status) {
int err = blk_status_to_errno(status);
printk(KERN_ERR "metapage_write_end_io: I/O error\n");
if (folio->mapping)
mapping_set_error(folio->mapping, err);
} SourceFile:fs/jfs/jfs_metapage.c]Results: map[] |
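The guard applied in the edit above can be modeled in isolation. This is a hypothetical userspace sketch (struct and function names are invented stand-ins, not the kernel types) showing that the guarded completion path tolerates a mapping that truncation cleared while the write was in flight:

```c
/* Hypothetical userspace model of the guard added to the write
 * completion handler: tolerate a folio whose mapping was cleared by
 * truncation while the write was in flight. */
#include <assert.h>
#include <stddef.h>

struct model_mapping { int error; };
struct model_folio   { struct model_mapping *mapping; };

/* Stand-in for mapping_set_error(): unconditionally dereferences its
 * first argument, just like the real helper. */
static void model_set_error(struct model_mapping *m, int err)
{
	m->error = err;
}

/* Guarded completion path: records the error only when the folio is
 * still attached to a mapping. Returns 1 if the error was recorded. */
static int model_write_complete(struct model_folio *folio, int err)
{
	if (err && folio->mapping) {
		model_set_error(folio->mapping, err);
		return 1;
	}
	return 0;
}
```

Without the `folio->mapping` check, the truncated case would pass NULL straight into the dereferencing helper.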
| 357/4 | 2026/03/16 16:40 | llm | patch-generator |
Model:gemini-3.1-pro-preview Tokens: input: 74673 output: 48 thoughts: 839 Thoughts: **Reviewing the Metapage Safety Checks** Okay, let's go through this one last time. I was initially concerned about potential null pointer dereferences, specifically with `folio->mapping` in the metapage code. First, I examined `metapage_write_one`. The code itself looks straightforward enough. I noted the crucial `filemap_check_errors(mapping)` call at the end, which could be problematic if `mapping` is NULL. However, I confirmed that `metapage_write_one` is only called from `force_metapage` and `release_metapage`, and I've already added checks in those functions to ensure `folio->mapping` is *not* NULL before calling `metapage_write_one`. Therefore, `metapage_write_one` itself is safe. Next, I turned my attention to `metapage_read_folio`. This is an address space operation, part of the page cache read path. I concluded that, because it's called by the VFS when reading from the page cache, the folio *must* be attached to a mapping at this point, so `folio->mapping` would never be NULL. Therefore, `metapage_read_folio` is safe as well. Then, I looked at `metapage_write_folio`. I saw it's called from both `metapage_writepages` and `metapage_write_one`. Since `metapage_writepages` uses `writeback_iter`, which skips folios with a NULL mapping, and `metapage_write_one` is only called when mapping is not NULL, `metapage_write_folio` is safe. So, the only location that potentially required a fix was `last_write_complete`. I've already implemented the fix there by adding a check before dereferencing `folio->mapping`. Now, I'll quickly review the changes I've made across the codebase to solidify my reasoning. 1. In both `insert_metapage` and `remove_metapage`, I switched the access from `folio->mapping->host->i_blkbits` to `mp->sb->s_blocksize_bits`, because that's what's actually needed. 2. 
I added an `if (folio->mapping)` check to `last_write_complete` before the call to `mapping_set_error`, to avoid the potential null pointer dereference. 3. I added checks in `force_metapage` and `release_metapage` to ensure `folio->mapping` is not NULL before calling `metapage_write_one`. Given this rigorous review, and after confirming that these changes address all potential vulnerabilities, I'm confident that the metapage code is now robust and safe. I'm ready to write the final reply. |
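Change (1) above can be sketched in userspace to show why it helps: both routes yield the same block-size log2, but only the old one walks through `folio->mapping`. The structs below are hypothetical stand-ins for the kernel definitions, not the real ones:

```c
/* Hypothetical model of switching insert_metapage()/remove_metapage()
 * from folio->mapping->host->i_blkbits to mp->sb->s_blocksize_bits.
 * The superblock route stays valid even after the folio has been
 * truncated (mapping == NULL). */
#include <assert.h>
#include <stddef.h>

struct model_sb      { int s_blocksize_bits; };
struct model_inode   { int i_blkbits; };
struct model_mapping { struct model_inode *host; };
struct model_folio   { struct model_mapping *mapping; };
struct model_mp      { struct model_sb *sb; };

/* Old route: crashes if folio->mapping is NULL. */
static int blkbits_via_mapping(struct model_folio *folio)
{
	return folio->mapping->host->i_blkbits;
}

/* New route: only needs the metapage's superblock pointer, which is
 * set once in __get_metapage() and never cleared by truncation. */
static int blkbits_via_sb(struct model_mp *mp)
{
	return mp->sb->s_blocksize_bits;
}
```

For an attached folio the two routes agree; for a truncated folio only the superblock route is usable at all.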
| 358/4 | 2026/03/16 16:40 | tool | codeexpert |
Args:map[Question:Is there any other place in fs/jfs/jfs_metapage.c where folio->mapping is dereferenced without checking if it is NULL?] Results: map[Answer:The analysis of `fs/jfs/jfs_metapage.c` reveals several locations where `folio->mapping` is dereferenced without a prior NULL check. While folios in the page cache typically have a valid mapping, certain conditions (such as truncation or eviction) can lead to `folio->mapping` being NULL, potentially causing a kernel crash.
The following locations in `fs/jfs/jfs_metapage.c` dereference `folio->mapping` without checking for NULL:
1. **`insert_metapage()`** (Line 104, within the `#if MPS_PER_PAGE > 1` block):
```c
l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
```
This dereferences `folio->mapping` to access the host inode's block bits.
2. **`remove_metapage()`** (Line 118, within the `#if MPS_PER_PAGE > 1` block):
```c
int l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
```
Similar to `insert_metapage`, this dereferences `folio->mapping`.
3. **`last_write_complete()`** (Line 330):
```c
mapping_set_error(folio->mapping, err);
```
In the event of an I/O error (`status` is non-zero), `mapping_set_error` is called, which dereferences `folio->mapping`.
4. **`metapage_write_folio()`** (Line 355):
```c
struct inode *inode = folio->mapping->host;
```
The function immediately obtains the inode from the folio's mapping.
5. **`metapage_read_folio()`** (Line 466):
```c
struct inode *inode = folio->mapping->host;
```
Like the write path, the read path dereferences the mapping to get the inode.
6. **`metapage_write_one()`** (Line 625 and 645):
```c
struct address_space *mapping = folio->mapping;
...
ret = filemap_check_errors(mapping);
```
Although `folio->mapping` is first assigned to a local variable `mapping`, it is later passed to `filemap_check_errors()`, which dereferences it (specifically `mapping->flags`).
In summary, there are multiple places in `fs/jfs/jfs_metapage.c` where `folio->mapping` is assumed to be non-NULL. If these functions are called on a folio that has been detached from its address space, a NULL pointer dereference will occur.] |
| 359/5 | 2026/03/16 16:40 | agent | codeexpert |
Model:gemini-3-flash-preview Instruction: You are a capable Linux kernel developer tasked with researching complex questions about kernel source code. You will be given a concrete question, and need to provide a concrete answer. Use tools extensively while researching the question. Don't make assumptions, or rely on your previous knowledge about the kernel source code, use available tools to access the actual source code. Use all available sources of information: - kernel source code - documentation in the Documentation dir in the source tree - git commits descriptions, git blame Prefer calling several tools at the same time to save round-trips. Prompt: Is there any other place in fs/jfs/jfs_metapage.c where folio->mapping is dereferenced without checking if it is NULL? Reply: The analysis of `fs/jfs/jfs_metapage.c` reveals several locations where `folio->mapping` is dereferenced without a prior NULL check. While folios in the page cache typically have a valid mapping, certain conditions (such as truncation or eviction) can lead to `folio->mapping` being NULL, potentially causing a kernel crash.
The following locations in `fs/jfs/jfs_metapage.c` dereference `folio->mapping` without checking for NULL:
1. **`insert_metapage()`** (Line 104, within the `#if MPS_PER_PAGE > 1` block):
```c
l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
```
This dereferences `folio->mapping` to access the host inode's block bits.
2. **`remove_metapage()`** (Line 118, within the `#if MPS_PER_PAGE > 1` block):
```c
int l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
```
Similar to `insert_metapage`, this dereferences `folio->mapping`.
3. **`last_write_complete()`** (Line 330):
```c
mapping_set_error(folio->mapping, err);
```
In the event of an I/O error (`status` is non-zero), `mapping_set_error` is called, which dereferences `folio->mapping`.
4. **`metapage_write_folio()`** (Line 355):
```c
struct inode *inode = folio->mapping->host;
```
The function immediately obtains the inode from the folio's mapping.
5. **`metapage_read_folio()`** (Line 466):
```c
struct inode *inode = folio->mapping->host;
```
Like the write path, the read path dereferences the mapping to get the inode.
6. **`metapage_write_one()`** (Line 625 and 645):
```c
struct address_space *mapping = folio->mapping;
...
ret = filemap_check_errors(mapping);
```
Although `folio->mapping` is first assigned to a local variable `mapping`, it is later passed to `filemap_check_errors()`, which dereferences it (specifically `mapping->flags`).
In summary, there are multiple places in `fs/jfs/jfs_metapage.c` where `folio->mapping` is assumed to be non-NULL. If these functions are called on a folio that has been detached from its address space, a NULL pointer dereference will occur. |
| 360/6 | 2026/03/16 16:40 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 2286 output: 28 thoughts: 116 Thoughts: **Analyzing Potential NULL Pointer Dereferences in jfs_metapage.c** Okay, so the user's concerned about potential NULL pointer dereferences of `folio->mapping` within `fs/jfs/jfs_metapage.c`. Right, that's a classic area for errors, especially in complex filesystems. My immediate thought is, let's break this down systematically. First, I need to pinpoint every single instance where `folio->mapping` is actually *used* in this file. I'm imagining a quick `grep` or a similar search on the file contents will do the trick. That's step one: the locations. Then, for each of those locations, I've got to carefully examine the surrounding code. Are there any explicit checks for `folio->mapping` being NULL *before* it's dereferenced? That's the crux of the problem. If there aren't, then we have a potential vulnerability. It could be hidden within a function call that itself checks, but that also needs to be verified. I'm looking for a straightforward `if (folio->mapping != NULL)` or something equivalent. So, let's get the file contents, fire up the search, and get down to brass tacks. I'm going to carefully review each line, thinking about possible execution paths and all the implications. This should be interesting. |
| 361/6 | 2026/03/16 16:40 | tool | read-file |
Args:map[File:fs/jfs/jfs_metapage.c] Results: map[Contents:// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Copyright (C) International Business Machines Corp., 2000-2005
* Portions Copyright (C) Christoph Hellwig, 2001-2002
*/
#include <linux/blkdev.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/bio.h>
#include <linux/slab.h>
#include <linux/init.h>
#include <linux/buffer_head.h>
#include <linux/mempool.h>
#include <linux/seq_file.h>
#include <linux/writeback.h>
#include <linux/migrate.h>
#include "jfs_incore.h"
#include "jfs_superblock.h"
#include "jfs_filsys.h"
#include "jfs_metapage.h"
#include "jfs_txnmgr.h"
#include "jfs_debug.h"
#ifdef CONFIG_JFS_STATISTICS
static struct {
uint pagealloc; /* # of page allocations */
uint pagefree; /* # of page frees */
uint lockwait; /* # of sleeping lock_metapage() calls */
} mpStat;
#endif
#define metapage_locked(mp) test_bit(META_locked, &(mp)->flag)
#define trylock_metapage(mp) test_and_set_bit_lock(META_locked, &(mp)->flag)
static inline void unlock_metapage(struct metapage *mp)
{
clear_bit_unlock(META_locked, &mp->flag);
wake_up(&mp->wait);
}
static inline void __lock_metapage(struct metapage *mp)
{
DECLARE_WAITQUEUE(wait, current);
INCREMENT(mpStat.lockwait);
add_wait_queue_exclusive(&mp->wait, &wait);
do {
set_current_state(TASK_UNINTERRUPTIBLE);
if (metapage_locked(mp)) {
folio_unlock(mp->folio);
io_schedule();
folio_lock(mp->folio);
}
} while (trylock_metapage(mp));
__set_current_state(TASK_RUNNING);
remove_wait_queue(&mp->wait, &wait);
}
/*
* Must have mp->folio locked
*/
static inline void lock_metapage(struct metapage *mp)
{
if (trylock_metapage(mp))
__lock_metapage(mp);
}
#define METAPOOL_MIN_PAGES 32
static struct kmem_cache *metapage_cache;
static mempool_t *metapage_mempool;
#define MPS_PER_PAGE (PAGE_SIZE >> L2PSIZE)
#if MPS_PER_PAGE > 1
struct meta_anchor {
int mp_count;
atomic_t io_count;
blk_status_t status;
struct metapage *mp[MPS_PER_PAGE];
};
static inline struct metapage *folio_to_mp(struct folio *folio, int offset)
{
struct meta_anchor *anchor = folio->private;
if (!anchor)
return NULL;
return anchor->mp[offset >> L2PSIZE];
}
static inline int insert_metapage(struct folio *folio, struct metapage *mp)
{
struct meta_anchor *a;
int index;
int l2mp_blocks; /* log2 blocks per metapage */
a = folio->private;
if (!a) {
a = kzalloc_obj(struct meta_anchor, GFP_NOFS);
if (!a)
return -ENOMEM;
folio_attach_private(folio, a);
kmap(&folio->page);
}
if (mp) {
l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1);
a->mp_count++;
a->mp[index] = mp;
}
return 0;
}
static inline void remove_metapage(struct folio *folio, struct metapage *mp)
{
struct meta_anchor *a = folio->private;
int l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
int index;
index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1);
BUG_ON(a->mp[index] != mp);
a->mp[index] = NULL;
if (--a->mp_count == 0) {
kfree(a);
folio_detach_private(folio);
kunmap(&folio->page);
}
}
static inline void inc_io(struct folio *folio)
{
struct meta_anchor *anchor = folio->private;
atomic_inc(&anchor->io_count);
}
static inline void dec_io(struct folio *folio, blk_status_t status,
void (*handler)(struct folio *, blk_status_t))
{
struct meta_anchor *anchor = folio->private;
if (anchor->status == BLK_STS_OK)
anchor->status = status;
if (atomic_dec_and_test(&anchor->io_count))
handler(folio, anchor->status);
}
#ifdef CONFIG_MIGRATION
static int __metapage_migrate_folio(struct address_space *mapping,
struct folio *dst, struct folio *src,
enum migrate_mode mode)
{
struct meta_anchor *src_anchor = src->private;
struct metapage *mps[MPS_PER_PAGE] = {0};
struct metapage *mp;
int i, rc;
for (i = 0; i < MPS_PER_PAGE; i++) {
mp = src_anchor->mp[i];
if (mp && metapage_locked(mp))
return -EAGAIN;
}
rc = filemap_migrate_folio(mapping, dst, src, mode);
if (rc)
return rc;
for (i = 0; i < MPS_PER_PAGE; i++) {
mp = src_anchor->mp[i];
if (!mp)
continue;
if (unlikely(insert_metapage(dst, mp))) {
/* If error, roll back previously inserted pages */
for (int j = 0 ; j < i; j++) {
if (mps[j])
remove_metapage(dst, mps[j]);
}
return -EAGAIN;
}
mps[i] = mp;
}
/* Update the metapage and remove it from src */
for (i = 0; i < MPS_PER_PAGE; i++) {
mp = mps[i];
if (mp) {
int page_offset = mp->data - folio_address(src);
mp->data = folio_address(dst) + page_offset;
mp->folio = dst;
remove_metapage(src, mp);
}
}
return 0;
}
#endif /* CONFIG_MIGRATION */
#else
static inline struct metapage *folio_to_mp(struct folio *folio, int offset)
{
return folio->private;
}
static inline int insert_metapage(struct folio *folio, struct metapage *mp)
{
if (mp) {
folio_attach_private(folio, mp);
kmap(&folio->page);
}
return 0;
}
static inline void remove_metapage(struct folio *folio, struct metapage *mp)
{
folio_detach_private(folio);
kunmap(&folio->page);
}
#define inc_io(folio) do {} while(0)
#define dec_io(folio, status, handler) handler(folio, status)
#ifdef CONFIG_MIGRATION
static int __metapage_migrate_folio(struct address_space *mapping,
struct folio *dst, struct folio *src,
enum migrate_mode mode)
{
struct metapage *mp;
int page_offset;
int rc;
mp = folio_to_mp(src, 0);
if (metapage_locked(mp))
return -EAGAIN;
rc = filemap_migrate_folio(mapping, dst, src, mode);
if (rc)
return rc;
if (unlikely(insert_metapage(dst, mp)))
return -EAGAIN;
page_offset = mp->data - folio_address(src);
mp->data = folio_address(dst) + page_offset;
mp->folio = dst;
remove_metapage(src, mp);
return 0;
}
#endif /* CONFIG_MIGRATION */
#endif
static inline struct metapage *alloc_metapage(gfp_t gfp_mask)
{
struct metapage *mp = mempool_alloc(metapage_mempool, gfp_mask);
if (mp) {
mp->lid = 0;
mp->lsn = 0;
mp->data = NULL;
mp->clsn = 0;
mp->log = NULL;
init_waitqueue_head(&mp->wait);
}
return mp;
}
static inline void free_metapage(struct metapage *mp)
{
mempool_free(mp, metapage_mempool);
}
int __init metapage_init(void)
{
/*
* Allocate the metapage structures
*/
metapage_cache = kmem_cache_create("jfs_mp", sizeof(struct metapage),
0, 0, NULL);
if (metapage_cache == NULL)
return -ENOMEM;
metapage_mempool = mempool_create_slab_pool(METAPOOL_MIN_PAGES,
metapage_cache);
if (metapage_mempool == NULL) {
kmem_cache_destroy(metapage_cache);
return -ENOMEM;
}
return 0;
}
void metapage_exit(void)
{
mempool_destroy(metapage_mempool);
kmem_cache_destroy(metapage_cache);
}
static inline void drop_metapage(struct folio *folio, struct metapage *mp)
{
if (mp->count || mp->nohomeok || test_bit(META_dirty, &mp->flag) ||
test_bit(META_io, &mp->flag))
return;
remove_metapage(folio, mp);
INCREMENT(mpStat.pagefree);
free_metapage(mp);
}
/*
* Metapage address space operations
*/
static sector_t metapage_get_blocks(struct inode *inode, sector_t lblock,
int *len)
{
int rc = 0;
int xflag;
s64 xaddr;
sector_t file_blocks = (inode->i_size + inode->i_sb->s_blocksize - 1) >>
inode->i_blkbits;
if (lblock >= file_blocks)
return 0;
if (lblock + *len > file_blocks)
*len = file_blocks - lblock;
if (inode->i_ino) {
rc = xtLookup(inode, (s64)lblock, *len, &xflag, &xaddr, len, 0);
if ((rc == 0) && *len)
lblock = (sector_t)xaddr;
else
lblock = 0;
} /* else no mapping */
return lblock;
}
static void last_read_complete(struct folio *folio, blk_status_t status)
{
if (status)
printk(KERN_ERR "Read error %d at %#llx\n", status,
folio_pos(folio));
folio_end_read(folio, status == 0);
}
static void metapage_read_end_io(struct bio *bio)
{
struct folio *folio = bio->bi_private;
dec_io(folio, bio->bi_status, last_read_complete);
bio_put(bio);
}
static void remove_from_logsync(struct metapage *mp)
{
struct jfs_log *log = mp->log;
unsigned long flags;
/*
* This can race. Recheck that log hasn't been set to null, and after
* acquiring logsync lock, recheck lsn
*/
if (!log)
return;
LOGSYNC_LOCK(log, flags);
if (mp->lsn) {
mp->log = NULL;
mp->lsn = 0;
mp->clsn = 0;
log->count--;
list_del(&mp->synclist);
}
LOGSYNC_UNLOCK(log, flags);
}
static void last_write_complete(struct folio *folio, blk_status_t status)
{
struct metapage *mp;
unsigned int offset;
if (status) {
int err = blk_status_to_errno(status);
printk(KERN_ERR "metapage_write_end_io: I/O error\n");
mapping_set_error(folio->mapping, err);
}
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (mp && test_bit(META_io, &mp->flag)) {
if (mp->lsn)
remove_from_logsync(mp);
clear_bit(META_io, &mp->flag);
}
/*
* I'd like to call drop_metapage here, but I don't think it's
* safe unless I have the page locked
*/
}
folio_end_writeback(folio);
}
static void metapage_write_end_io(struct bio *bio)
{
struct folio *folio = bio->bi_private;
BUG_ON(!folio->private);
dec_io(folio, bio->bi_status, last_write_complete);
bio_put(bio);
}
static int metapage_write_folio(struct folio *folio,
struct writeback_control *wbc)
{
struct bio *bio = NULL;
int block_offset; /* block offset of mp within page */
struct inode *inode = folio->mapping->host;
int blocks_per_mp = JFS_SBI(inode->i_sb)->nbperpage;
int len;
int xlen;
struct metapage *mp;
int redirty = 0;
sector_t lblock;
int nr_underway = 0;
sector_t pblock;
sector_t next_block = 0;
sector_t page_start;
unsigned long bio_bytes = 0;
unsigned long bio_offset = 0;
int offset;
int bad_blocks = 0;
page_start = folio_pos(folio) >> inode->i_blkbits;
BUG_ON(!folio_test_locked(folio));
BUG_ON(folio_test_writeback(folio));
folio_start_writeback(folio);
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (!mp || !test_bit(META_dirty, &mp->flag))
continue;
if (mp->nohomeok && !test_bit(META_forcewrite, &mp->flag)) {
redirty = 1;
/*
* Make sure this page isn't blocked indefinitely.
* If the journal isn't undergoing I/O, push it
*/
if (mp->log && !(mp->log->cflag & logGC_PAGEOUT))
jfs_flush_journal(mp->log, 0);
continue;
}
clear_bit(META_dirty, &mp->flag);
set_bit(META_io, &mp->flag);
block_offset = offset >> inode->i_blkbits;
lblock = page_start + block_offset;
if (bio) {
if (xlen && lblock == next_block) {
/* Contiguous, in memory & on disk */
len = min(xlen, blocks_per_mp);
xlen -= len;
bio_bytes += len << inode->i_blkbits;
continue;
}
/* Not contiguous */
bio_add_folio_nofail(bio, folio, bio_bytes, bio_offset);
/*
* Increment counter before submitting i/o to keep
* count from hitting zero before we're through
*/
inc_io(folio);
if (!bio->bi_iter.bi_size)
goto dump_bio;
submit_bio(bio);
nr_underway++;
bio = NULL;
} else
inc_io(folio);
xlen = (folio_size(folio) - offset) >> inode->i_blkbits;
pblock = metapage_get_blocks(inode, lblock, &xlen);
if (!pblock) {
printk(KERN_ERR "JFS: metapage_get_blocks failed\n");
/*
* We already called inc_io(), but can't cancel it
* with dec_io() until we're done with the page
*/
bad_blocks++;
continue;
}
len = min(xlen, (int)JFS_SBI(inode->i_sb)->nbperpage);
bio = bio_alloc(inode->i_sb->s_bdev, 1, REQ_OP_WRITE, GFP_NOFS);
bio->bi_iter.bi_sector = pblock << (inode->i_blkbits - 9);
bio->bi_end_io = metapage_write_end_io;
bio->bi_private = folio;
/* Don't call bio_add_page yet, we may add to this vec */
bio_offset = offset;
bio_bytes = len << inode->i_blkbits;
xlen -= len;
next_block = lblock + len;
}
if (bio) {
bio_add_folio_nofail(bio, folio, bio_bytes, bio_offset);
if (!bio->bi_iter.bi_size)
goto dump_bio;
submit_bio(bio);
nr_underway++;
}
if (redirty)
folio_redirty_for_writepage(wbc, folio);
folio_unlock(folio);
if (bad_blocks)
goto err_out;
if (nr_underway == 0)
folio_end_writeback(folio);
return 0;
dump_bio:
print_hex_dump(KERN_ERR, "JFS: dump of bio: ", DUMP_PREFIX_ADDRESS, 16,
4, bio, sizeof(*bio), 0);
bio_put(bio);
folio_unlock(folio);
dec_io(folio, BLK_STS_OK, last_write_complete);
err_out:
while (bad_blocks--)
dec_io(folio, BLK_STS_OK, last_write_complete);
return -EIO;
}
static int metapage_writepages(struct address_space *mapping,
struct writeback_control *wbc)
{
struct blk_plug plug;
struct folio *folio = NULL;
int err;
blk_start_plug(&plug);
while ((folio = writeback_iter(mapping, wbc, folio, &err)))
err = metapage_write_folio(folio, wbc);
blk_finish_plug(&plug);
return err;
}
static int metapage_read_folio(struct file *fp, struct folio *folio)
{
struct inode *inode = folio->mapping->host;
struct bio *bio = NULL;
int block_offset;
int blocks_per_page = i_blocks_per_folio(inode, folio);
sector_t page_start; /* address of page in fs blocks */
sector_t pblock;
int xlen;
unsigned int len;
int offset;
BUG_ON(!folio_test_locked(folio));
page_start = folio_pos(folio) >> inode->i_blkbits;
block_offset = 0;
while (block_offset < blocks_per_page) {
xlen = blocks_per_page - block_offset;
pblock = metapage_get_blocks(inode, page_start + block_offset,
&xlen);
if (pblock) {
if (!folio->private)
insert_metapage(folio, NULL);
inc_io(folio);
if (bio)
submit_bio(bio);
bio = bio_alloc(inode->i_sb->s_bdev, 1, REQ_OP_READ,
GFP_NOFS);
bio->bi_iter.bi_sector =
pblock << (inode->i_blkbits - 9);
bio->bi_end_io = metapage_read_end_io;
bio->bi_private = folio;
len = xlen << inode->i_blkbits;
offset = block_offset << inode->i_blkbits;
bio_add_folio_nofail(bio, folio, len, offset);
block_offset += xlen;
} else
block_offset++;
}
if (bio)
submit_bio(bio);
else
folio_unlock(folio);
return 0;
}
static bool metapage_release_folio(struct folio *folio, gfp_t gfp_mask)
{
struct metapage *mp;
bool ret = true;
int offset;
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
mp = folio_to_mp(folio, offset);
if (!mp)
continue;
jfs_info("metapage_release_folio: mp = 0x%p", mp);
if (mp->count || mp->nohomeok ||
test_bit(META_dirty, &mp->flag)) {
jfs_info("count = %ld, nohomeok = %d", mp->count,
mp->nohomeok);
ret = false;
continue;
}
if (mp->lsn)
remove_from_logsync(mp);
remove_metapage(folio, mp);
INCREMENT(mpStat.pagefree);
free_metapage(mp);
}
return ret;
}
#ifdef CONFIG_MIGRATION
/*
* metapage_migrate_folio - Migration function for JFS metapages
*/
static int metapage_migrate_folio(struct address_space *mapping,
struct folio *dst, struct folio *src,
enum migrate_mode mode)
{
int expected_count;
if (!src->private)
return filemap_migrate_folio(mapping, dst, src, mode);
/* Check whether page does not have extra refs before we do more work */
expected_count = folio_expected_ref_count(src) + 1;
if (folio_ref_count(src) != expected_count)
return -EAGAIN;
return __metapage_migrate_folio(mapping, dst, src, mode);
}
#else
#define metapage_migrate_folio NULL
#endif /* CONFIG_MIGRATION */
static void metapage_invalidate_folio(struct folio *folio, size_t offset,
size_t length)
{
BUG_ON(offset || length < folio_size(folio));
BUG_ON(folio_test_writeback(folio));
metapage_release_folio(folio, 0);
}
const struct address_space_operations jfs_metapage_aops = {
.read_folio = metapage_read_folio,
.writepages = metapage_writepages,
.release_folio = metapage_release_folio,
.invalidate_folio = metapage_invalidate_folio,
.dirty_folio = filemap_dirty_folio,
.migrate_folio = metapage_migrate_folio,
};
struct metapage *__get_metapage(struct inode *inode, unsigned long lblock,
unsigned int size, int absolute,
unsigned long new)
{
int l2BlocksPerPage;
int l2bsize;
struct address_space *mapping;
struct metapage *mp = NULL;
struct folio *folio;
unsigned long page_index;
unsigned long page_offset;
jfs_info("__get_metapage: ino = %ld, lblock = 0x%lx, abs=%d",
inode->i_ino, lblock, absolute);
l2bsize = inode->i_blkbits;
l2BlocksPerPage = PAGE_SHIFT - l2bsize;
page_index = lblock >> l2BlocksPerPage;
page_offset = (lblock - (page_index << l2BlocksPerPage)) << l2bsize;
if ((page_offset + size) > PAGE_SIZE) {
jfs_err("MetaData crosses page boundary!!");
jfs_err("lblock = %lx, size = %d", lblock, size);
dump_stack();
return NULL;
}
if (absolute)
mapping = JFS_SBI(inode->i_sb)->direct_inode->i_mapping;
else {
/*
* If an nfs client tries to read an inode that is larger
* than any existing inodes, we may try to read past the
* end of the inode map
*/
if ((lblock << inode->i_blkbits) >= inode->i_size)
return NULL;
mapping = inode->i_mapping;
}
if (new && (PSIZE == PAGE_SIZE)) {
folio = filemap_grab_folio(mapping, page_index);
if (IS_ERR(folio)) {
jfs_err("filemap_grab_folio failed!");
return NULL;
}
folio_mark_uptodate(folio);
} else {
folio = read_mapping_folio(mapping, page_index, NULL);
if (IS_ERR(folio)) {
jfs_err("read_mapping_page failed!");
return NULL;
}
folio_lock(folio);
}
mp = folio_to_mp(folio, page_offset);
if (mp) {
if (mp->logical_size != size) {
jfs_error(inode->i_sb,
"get_mp->logical_size != size\n");
jfs_err("logical_size = %d, size = %d",
mp->logical_size, size);
dump_stack();
goto unlock;
}
mp->count++;
lock_metapage(mp);
if (test_bit(META_discard, &mp->flag)) {
if (!new) {
jfs_error(inode->i_sb,
"using a discarded metapage\n");
discard_metapage(mp);
goto unlock;
}
clear_bit(META_discard, &mp->flag);
}
} else {
INCREMENT(mpStat.pagealloc);
mp = alloc_metapage(GFP_NOFS);
if (!mp)
goto unlock;
mp->folio = folio;
mp->sb = inode->i_sb;
mp->flag = 0;
mp->xflag = COMMIT_PAGE;
mp->count = 1;
mp->nohomeok = 0;
mp->logical_size = size;
mp->data = folio_address(folio) + page_offset;
mp->index = lblock;
if (unlikely(insert_metapage(folio, mp))) {
free_metapage(mp);
goto unlock;
}
lock_metapage(mp);
}
if (new) {
jfs_info("zeroing mp = 0x%p", mp);
memset(mp->data, 0, PSIZE);
}
folio_unlock(folio);
jfs_info("__get_metapage: returning = 0x%p data = 0x%p", mp, mp->data);
return mp;
unlock:
folio_unlock(folio);
return NULL;
}
void grab_metapage(struct metapage * mp)
{
jfs_info("grab_metapage: mp = 0x%p", mp);
folio_get(mp->folio);
folio_lock(mp->folio);
mp->count++;
lock_metapage(mp);
folio_unlock(mp->folio);
}
static int metapage_write_one(struct folio *folio)
{
struct address_space *mapping = folio->mapping;
struct writeback_control wbc = {
.sync_mode = WB_SYNC_ALL,
.nr_to_write = folio_nr_pages(folio),
};
int ret = 0;
BUG_ON(!folio_test_locked(folio));
folio_wait_writeback(folio);
if (folio_clear_dirty_for_io(folio)) {
folio_get(folio);
ret = metapage_write_folio(folio, &wbc);
if (ret == 0)
folio_wait_writeback(folio);
folio_put(folio);
} else {
folio_unlock(folio);
}
if (!ret)
ret = filemap_check_errors(mapping);
return ret;
}
void force_metapage(struct metapage *mp)
{
struct folio *folio = mp->folio;
jfs_info("force_metapage: mp = 0x%p", mp);
set_bit(META_forcewrite, &mp->flag);
clear_bit(META_sync, &mp->flag);
	folio_get(folio);
	folio_lock(folio);
	folio_mark_dirty(folio);
	if (metapage_write_one(folio))
		jfs_error(mp->sb, "metapage_write_one() failed\n");
	clear_bit(META_forcewrite, &mp->flag);
	folio_put(folio);
}

void hold_metapage(struct metapage *mp)
{
	folio_lock(mp->folio);
}

void put_metapage(struct metapage *mp)
{
	if (mp->count || mp->nohomeok) {
		/* Someone else will release this */
		folio_unlock(mp->folio);
		return;
	}
	folio_get(mp->folio);
	mp->count++;
	lock_metapage(mp);
	folio_unlock(mp->folio);
	release_metapage(mp);
}

void release_metapage(struct metapage * mp)
{
	struct folio *folio = mp->folio;

	jfs_info("release_metapage: mp = 0x%p, flag = 0x%lx", mp, mp->flag);

	folio_lock(folio);
	unlock_metapage(mp);

	assert(mp->count);
	if (--mp->count || mp->nohomeok) {
		folio_unlock(folio);
		folio_put(folio);
		return;
	}

	if (test_bit(META_dirty, &mp->flag)) {
		folio_mark_dirty(folio);
		if (test_bit(META_sync, &mp->flag)) {
			clear_bit(META_sync, &mp->flag);
			if (metapage_write_one(folio))
				jfs_error(mp->sb, "metapage_write_one() failed\n");
			folio_lock(folio);
		}
	} else if (mp->lsn)	/* discard_metapage doesn't remove it */
		remove_from_logsync(mp);

	/* Try to keep metapages from using up too much memory */
	drop_metapage(folio, mp);

	folio_unlock(folio);
	folio_put(folio);
}

void __invalidate_metapages(struct inode *ip, s64 addr, int len)
{
	sector_t lblock;
	int l2BlocksPerPage = PAGE_SHIFT - ip->i_blkbits;
	int BlocksPerPage = 1 << l2BlocksPerPage;
	/* All callers are interested in block device's mapping */
	struct address_space *mapping =
		JFS_SBI(ip->i_sb)->direct_inode->i_mapping;
	struct metapage *mp;
	unsigned int offset;

	/*
	 * Mark metapages to discard.  They will eventually be
	 * released, but should not be written.
	 */
	for (lblock = addr & ~(BlocksPerPage - 1); lblock < addr + len;
	     lblock += BlocksPerPage) {
		struct folio *folio = filemap_lock_folio(mapping,
				lblock >> l2BlocksPerPage);
		if (IS_ERR(folio))
			continue;
		for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
			mp = folio_to_mp(folio, offset);
			if (!mp)
				continue;
			if (mp->index < addr)
				continue;
			if (mp->index >= addr + len)
				break;

			clear_bit(META_dirty, &mp->flag);
			set_bit(META_discard, &mp->flag);
			if (mp->lsn)
				remove_from_logsync(mp);
		}
		folio_unlock(folio);
		folio_put(folio);
	}
}

#ifdef CONFIG_JFS_STATISTICS
int jfs_mpstat_proc_show(struct seq_file *m, void *v)
{
	seq_printf(m,
		   "JFS Metapage statistics\n"
		   "=======================\n"
		   "page allocations = %d\n"
		   "page frees = %d\n"
		   "lock waits = %d\n",
		   mpStat.pagealloc,
		   mpStat.pagefree,
		   mpStat.lockwait);
	return 0;
}
#endif
] |
| 362/6 | 2026/03/16 16:40 | llm | codeexpert |
0mModel:gemini-3-flash-preview Tokens: input: 11312 output: 23 thoughts: 1595 Thoughts: **Analysis of `folio->mapping` Dereferences in `fs/jfs/jfs_metapage.c`** Okay, I've gone through the file `fs/jfs/jfs_metapage.c` and found a number of instances where `folio->mapping` is accessed. I've broken them down as requested. 1. **`insert_metapage` and `remove_metapage`**: In both functions, the code dereferences `folio->mapping` to access `host->i_blkbits`. The critical point is that no null check exists *before* this dereference. This could lead to a crash if the folio's mapping is uninitialized, particularly during certain error scenarios or truncation. 2. **`last_write_complete`**: Here, `folio->mapping` is passed directly to `mapping_set_error`. Once again, there's no null check. Now, given that this function is an I/O completion handler, I'm thinking that if the folio is removed from the page cache *while I/O is still in progress*, `folio->mapping` could very well be NULL at this point. This seems like a likely candidate for a bug. 3. **`metapage_write_folio` and `metapage_read_folio`**: In these functions, the code obtains the inode from `folio->mapping->host`. It's part of the standard `address_space_operations`, so ordinarily I wouldn't expect `folio->mapping` to be NULL. However, I need to keep in mind the potential for truncation and the possibility that this might be used in a callback that could see the folio in a transient state. No NULL check exists here either. 4. **`metapage_write_one`**: The code assigns `folio->mapping` to `mapping`, and then the variable `mapping` is passed to `filemap_check_errors`. Again, no null check prior to `filemap_check_errors` calling on `mapping`. Now, the important question is whether `folio->mapping` *can* be NULL in any of these contexts. 
I have a hunch that, while generally the page cache guarantees a valid mapping, situations during truncation, eviction, or I/O completion after removal from the page cache *could* result in a NULL pointer. Specifically, `last_write_complete` seems like the highest-risk area. I should probably go back and search the commit history to see if there have been any relevant fixes or known issues involving this. The fact that the question asks if there's *another* place where it's dereferenced without a null check suggests there may already be a known instance. |
| 363/6 | 2026/03/16 16:40 | tool | codesearch-dir-index |
Args:map[Dir:fs/jfs/] Results: map[Files:[acl.c file.c inode.c ioctl.c jfs_acl.h jfs_btree.h jfs_debug.c jfs_debug.h jfs_dinode.h jfs_discard.c jfs_discard.h jfs_dmap.c jfs_dmap.h jfs_dtree.c jfs_dtree.h jfs_extent.c jfs_extent.h jfs_filsys.h jfs_imap.c jfs_imap.h jfs_incore.h jfs_inode.c jfs_inode.h jfs_lock.h jfs_logmgr.c jfs_logmgr.h jfs_metapage.c jfs_metapage.h jfs_mount.c jfs_superblock.h jfs_txnmgr.c jfs_txnmgr.h jfs_types.h jfs_umount.c jfs_unicode.c jfs_unicode.h jfs_xattr.h jfs_xtree.c jfs_xtree.h namei.c resize.c super.c symlink.c xattr.c] Subdirs:<nil>] |
| 364/6 | 2026/03/16 16:40 | llm | codeexpert |
0mModel:gemini-3-flash-preview Tokens: input: 13926 output: 41 thoughts: 993 Thoughts: **Analyzing Folio Mapping and Potential Null Dereference** Okay, so I'm trying to figure out if there's a potential null pointer dereference related to `folio->mapping`, but I can't directly use `git log` or `grep` to quickly search. I'm going to have to manually inspect the code. First, I ruled out a direct `git log` approach, so I need to methodically check for usages of `folio->mapping`. I initially missed a few instances, so I'm going back to basics. I had identified the usage of `folio->mapping` in `i_blocks_per_folio` macro, but now I think to examine the places where `insert_metapage` and `remove_metapage` are called. I'm seeing that these functions, in turn, use `folio->mapping` to determine the calculation of `l2mp_blocks`. I see that `insert_metapage` takes it from: ```c l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits; ``` And `remove_metapage` takes it from: ```c int l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits; ``` I've also noted the call sites of `insert_metapage` and `remove_metapage`, which are called from quite a few places including `__metapage_migrate_folio`, `__get_metapage`, `metapage_read_folio`, `drop_metapage` and `metapage_release_folio`. Critically, in `__metapage_migrate_folio`, the `mapping` is passed as an argument, so this should be safe. However, in that code, `filemap_migrate_folio` uses the `mapping` argument and `insert_metapage` and `remove_metapage` uses `folio->mapping`. For `dst`, it appears it should be set by the migration code. This made me realize I need to understand if `folio->mapping` can ever be NULL. I know that `metapage_release_folio` is called with the `.release_folio` operation, which might be called during folio eviction. I'm recalling that `release_folio` is generally called when the kernel wants to free the folio's private data before freeing the folio itself. 
If the folio is in the page cache, then it should have a valid mapping. However, I recall a recent JFS fix, so let's revisit `metapage_write_folio`, which also uses `folio->mapping`. Here: ```c struct inode *inode = folio->mapping->host; ``` If `folio->mapping` is NULL, this will cause a crash. Now I need to systematically search for *all* places where `folio->mapping` is used, since there is no `grep` or `git` to help, I will have to use manual code scanning or potentially something like `codesearch-find-references` (if possible). I can try to find references to `struct folio::mapping`. |
| 365/6 | 2026/03/16 16:40 | tool | codesearch-find-references |
Args:map[Name:folio::mapping SourceTreePrefix:fs/jfs/jfs_metapage.c] Results: map[References:<nil> TruncatedOutput:false] |
| 366/6 | 2026/03/16 16:40 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 15664 output: 28 thoughts: 690 Thoughts: **Debugging a Potential NULL Dereference**
Okay, that initial approach didn't pan out; I need to change gears. Let's start with a focused string search for "mapping" within the file. Ah, right, I've already scanned the file. My mistake.
I need to focus on potential `folio->mapping` dereferences and see if they can be NULL. I'm looking at `metapage_read_folio` and immediately see it:
```c
static int metapage_read_folio(struct file *fp, struct folio *folio)
{
struct inode *inode = folio->mapping->host;
```
Interesting, it dereferences `folio->mapping` there.
Now, wait, here's another spot: `last_write_complete`:
```c
if (status) {
int err = blk_status_to_errno(status);
printk(KERN_ERR "metapage_write_end_io: I/O error\n");
mapping_set_error(folio->mapping, err);
}
```
If `status` indicates an error, `mapping_set_error(folio->mapping, err)` is called. And, `mapping_set_error` itself is:
```c
static inline void mapping_set_error(struct address_space *mapping, int error)
{
if (unlikely(error)) {
if (error == -ENOSPC)
set_bit(AS_ENOSPC, &mapping->flags);
else
set_bit(AS_EIO, &mapping->flags);
}
}
```
So it also dereferences `mapping`. Can `folio->mapping` *actually* be NULL here? Let's think through this. `last_write_complete` is called from `metapage_write_end_io`, which is a `bio->bi_end_io` handler. The folio was under writeback, and a folio under writeback normally keeps its mapping until writeback completes; but if the folio is truncated while the I/O is still in flight, `folio->mapping` can already be NULL by the time the completion handler runs.
But wait, I recall seeing something similar in another filesystem or an older version of JFS... yes, here it is in `insert_metapage`.
```c
#if MPS_PER_PAGE > 1
// ...
static inline int insert_metapage(struct folio *folio, struct metapage *mp)
{
// ...
if (mp) {
l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
```
Ah-ha! If `MPS_PER_PAGE == 1`, the definition of `insert_metapage` changes:
```c
#else
// ...
static inline int insert_metapage(struct folio *folio, struct metapage *mp)
{
if (mp) {
folio_attach_private(folio, mp);
kmap(&folio->page);
}
return 0;
}
```
Here, it *doesn't* use `folio->mapping`. Therefore, the potential NULL dereference is *only* when `MPS_PER_PAGE > 1`. Now, what is `MPS_PER_PAGE`? That is `PAGE_SIZE >> L2PSIZE`. Time to look at `jfs_metapage.h` to check `L2PSIZE`. I need to figure out under what conditions `MPS_PER_PAGE > 1`.
|
| 367/6 | 2026/03/16 16:40 | tool | read-file |
Args:map[File:fs/jfs/jfs_metapage.h] Results: map[Contents:/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Copyright (C) International Business Machines Corp., 2000-2002
* Portions Copyright (C) Christoph Hellwig, 2001-2002
*/
#ifndef _H_JFS_METAPAGE
#define _H_JFS_METAPAGE
#include <linux/pagemap.h>
struct metapage {
/* Common logsyncblk prefix (see jfs_logmgr.h) */
u16 xflag;
u16 unused;
lid_t lid;
int lsn;
struct list_head synclist;
/* End of logsyncblk prefix */
unsigned long flag; /* See Below */
unsigned long count; /* Reference count */
void *data; /* Data pointer */
sector_t index; /* block address of page */
wait_queue_head_t wait;
/* implementation */
struct folio *folio;
struct super_block *sb;
unsigned int logical_size;
/* Journal management */
int clsn;
int nohomeok;
struct jfs_log *log;
};
/* metapage flag */
#define META_locked 0
#define META_dirty 2
#define META_sync 3
#define META_discard 4
#define META_forcewrite 5
#define META_io 6
#define mark_metapage_dirty(mp) set_bit(META_dirty, &(mp)->flag)
/* function prototypes */
extern int metapage_init(void);
extern void metapage_exit(void);
extern struct metapage *__get_metapage(struct inode *inode,
unsigned long lblock, unsigned int size,
int absolute, unsigned long new);
#define read_metapage(inode, lblock, size, absolute)\
__get_metapage(inode, lblock, size, absolute, false)
#define get_metapage(inode, lblock, size, absolute)\
__get_metapage(inode, lblock, size, absolute, true)
extern void release_metapage(struct metapage *);
extern void grab_metapage(struct metapage *);
extern void force_metapage(struct metapage *);
/*
* hold_metapage and put_metapage are used in conjunction. The page lock
* is not dropped between the two, so no other threads can get or release
* the metapage
*/
extern void hold_metapage(struct metapage *);
extern void put_metapage(struct metapage *);
static inline void write_metapage(struct metapage *mp)
{
set_bit(META_dirty, &mp->flag);
release_metapage(mp);
}
static inline void flush_metapage(struct metapage *mp)
{
set_bit(META_sync, &mp->flag);
write_metapage(mp);
}
static inline void discard_metapage(struct metapage *mp)
{
clear_bit(META_dirty, &mp->flag);
set_bit(META_discard, &mp->flag);
release_metapage(mp);
}
static inline void metapage_nohomeok(struct metapage *mp)
{
struct folio *folio = mp->folio;
folio_lock(folio);
if (!mp->nohomeok++) {
mark_metapage_dirty(mp);
folio_get(folio);
folio_wait_writeback(folio);
}
folio_unlock(folio);
}
/*
* This serializes access to mp->lsn when metapages are added to logsynclist
* without setting nohomeok. i.e. updating imap & dmap
*/
static inline void metapage_wait_for_io(struct metapage *mp)
{
if (test_bit(META_io, &mp->flag))
folio_wait_writeback(mp->folio);
}
/*
* This is called when already holding the metapage
*/
static inline void _metapage_homeok(struct metapage *mp)
{
if (!--mp->nohomeok)
folio_put(mp->folio);
}
static inline void metapage_homeok(struct metapage *mp)
{
hold_metapage(mp);
_metapage_homeok(mp);
put_metapage(mp);
}
extern const struct address_space_operations jfs_metapage_aops;
/*
* This routines invalidate all pages for an extent.
*/
extern void __invalidate_metapages(struct inode *, s64, int);
#define invalidate_pxd_metapages(ip, pxd) \
__invalidate_metapages((ip), addressPXD(&(pxd)), lengthPXD(&(pxd)))
#define invalidate_dxd_metapages(ip, dxd) \
__invalidate_metapages((ip), addressDXD(&(dxd)), lengthDXD(&(dxd)))
#define invalidate_xad_metapages(ip, xad) \
__invalidate_metapages((ip), addressXAD(&(xad)), lengthXAD(&(xad)))
#endif /* _H_JFS_METAPAGE */
] |
| 368/6 | 2026/03/16 16:40 | llm | codeexpert |
0mModel:gemini-3-flash-preview Tokens: input: 18442 output: 27 thoughts: 44 Thoughts: Okay, here's my take on those thoughts, tailored for an expert audience: **Investigating the L2PSIZE Macro** Hmm, I'm just looking through `jfs_metapage.h`, and I can't seem to find the definition for `L2PSIZE`. That's a bit odd. I was expecting it to be declared here, given the context. Let me make a mental note: it's not present where I anticipated. Alright, since it's not in the metapage header, it's reasonable to assume it's located in the broader file system definitions. My next instinct is to check `jfs_filsys.h`. That's where I'd expect to find global constants and size definitions related to the JFS file system as a whole. Time to dive into that one and see if it's defined there. |
| 369/6 | 2026/03/16 16:40 | tool | read-file |
0mArgs:map[File:fs/jfs/jfs_filsys.h] Results: map[Contents:/* SPDX-License-Identifier: GPL-2.0-or-later */ /* * Copyright (C) International Business Machines Corp., 2000-2003 */ #ifndef _H_JFS_FILSYS #define _H_JFS_FILSYS /* * jfs_filsys.h * * file system (implementation-dependent) constants * * refer to <limits.h> for system wide implementation-dependent constants */ /* * file system option (superblock flag) */ /* directory option */ #define JFS_UNICODE 0x00000001 /* unicode name */ /* mount time flags for error handling */ #define JFS_ERR_REMOUNT_RO 0x00000002 /* remount read-only */ #define JFS_ERR_CONTINUE 0x00000004 /* continue */ #define JFS_ERR_PANIC 0x00000008 /* panic */ #define JFS_ERR_MASK (JFS_ERR_REMOUNT_RO|JFS_ERR_CONTINUE|JFS_ERR_PANIC) /* Quota support */ #define JFS_USRQUOTA 0x00000010 #define JFS_GRPQUOTA 0x00000020 /* mount time flag to disable journaling to disk */ #define JFS_NOINTEGRITY 0x00000040 /* mount time flag to enable TRIM to ssd disks */ #define JFS_DISCARD 0x00000080 /* commit option */ #define JFS_COMMIT 0x00000f00 /* commit option mask */ #define JFS_GROUPCOMMIT 0x00000100 /* group (of 1) commit */ #define JFS_LAZYCOMMIT 0x00000200 /* lazy commit */ #define JFS_TMPFS 0x00000400 /* temporary file system - * do not log/commit: * Never implemented */ /* log logical volume option */ #define JFS_INLINELOG 0x00000800 /* inline log within file system */ #define JFS_INLINEMOVE 0x00001000 /* inline log being moved */ /* Secondary aggregate inode table */ #define JFS_BAD_SAIT 0x00010000 /* current secondary ait is bad */ /* sparse regular file support */ #define JFS_SPARSE 0x00020000 /* sparse regular file */ /* DASD Limits F226941 */ #define JFS_DASD_ENABLED 0x00040000 /* DASD limits enabled */ #define JFS_DASD_PRIME 0x00080000 /* Prime DASD usage on boot */ /* big endian flag */ #define JFS_SWAP_BYTES 0x00100000 /* running on big endian computer */ /* Directory index */ #define JFS_DIR_INDEX 0x00200000 /* Persistent index for */ /* 
platform options */ #define JFS_LINUX 0x10000000 /* Linux support */ #define JFS_DFS 0x20000000 /* DCE DFS LFS support */ /* Never implemented */ #define JFS_OS2 0x40000000 /* OS/2 support */ /* case-insensitive name/directory support */ #define JFS_AIX 0x80000000 /* AIX support */ /* * buffer cache configuration */ /* page size */ #ifdef PSIZE #undef PSIZE #endif #define PSIZE 4096 /* page size (in byte) */ #define L2PSIZE 12 /* log2(PSIZE) */ #define POFFSET 4095 /* offset within page */ /* buffer page size */ #define BPSIZE PSIZE /* * fs fundamental size * * PSIZE >= file system block size >= PBSIZE >= DISIZE */ #define PBSIZE 512 /* physical block size (in byte) */ #define L2PBSIZE 9 /* log2(PBSIZE) */ #define DISIZE 512 /* on-disk inode size (in byte) */ #define L2DISIZE 9 /* log2(DISIZE) */ #define IDATASIZE 256 /* inode inline data size */ #define IXATTRSIZE 128 /* inode inline extended attribute size */ #define XTPAGE_SIZE 4096 #define log2_PAGESIZE 12 #define IAG_SIZE 4096 #define IAG_EXTENT_SIZE 4096 #define INOSPERIAG 4096 /* number of disk inodes per iag */ #define L2INOSPERIAG 12 /* l2 number of disk inodes per iag */ #define INOSPEREXT 32 /* number of disk inode per extent */ #define L2INOSPEREXT 5 /* l2 number of disk inode per extent */ #define IXSIZE (DISIZE * INOSPEREXT) /* inode extent size */ #define INOSPERPAGE 8 /* number of disk inodes per 4K page */ #define L2INOSPERPAGE 3 /* log2(INOSPERPAGE) */ #define IAGFREELIST_LWM 64 #define INODE_EXTENT_SIZE IXSIZE /* inode extent size */ #define NUM_INODE_PER_EXTENT INOSPEREXT #define NUM_INODE_PER_IAG INOSPERIAG #define MINBLOCKSIZE 512 #define L2MINBLOCKSIZE 9 #define MAXBLOCKSIZE 4096 #define L2MAXBLOCKSIZE 12 #define MAXFILESIZE ((s64)1 << 52) #define JFS_LINK_MAX 0xffffffff /* Minimum number of bytes supported for a JFS partition */ #define MINJFS (0x1000000) #define MINJFSTEXT "16" /* * file system block size -> physical block size */ #define LBOFFSET(x) ((x) & (PBSIZE - 1)) #define LBNUMBER(x) 
((x) >> L2PBSIZE) #define LBLK2PBLK(sb,b) ((b) << (sb->s_blocksize_bits - L2PBSIZE)) #define PBLK2LBLK(sb,b) ((b) >> (sb->s_blocksize_bits - L2PBSIZE)) /* size in byte -> last page number */ #define SIZE2PN(size) ( ((s64)((size) - 1)) >> (L2PSIZE) ) /* size in byte -> last file system block number */ #define SIZE2BN(size, l2bsize) ( ((s64)((size) - 1)) >> (l2bsize) ) /* * fixed physical block address (physical block size = 512 byte) * * NOTE: since we can't guarantee a physical block size of 512 bytes the use of * these macros should be removed and the byte offset macros used instead. */ #define SUPER1_B 64 /* primary superblock */ #define AIMAP_B (SUPER1_B + 8) /* 1st extent of aggregate inode map */ #define AITBL_B (AIMAP_B + 16) /* * 1st extent of aggregate inode table */ #define SUPER2_B (AITBL_B + 32) /* 2ndary superblock pbn */ #define BMAP_B (SUPER2_B + 8) /* block allocation map */ /* * SIZE_OF_SUPER defines the total amount of space reserved on disk for the * superblock. This is not the same as the superblock structure, since all of * this space is not currently being used. 
*/ #define SIZE_OF_SUPER PSIZE /* * SIZE_OF_AG_TABLE defines the amount of space reserved to hold the AG table */ #define SIZE_OF_AG_TABLE PSIZE /* * SIZE_OF_MAP_PAGE defines the amount of disk space reserved for each page of * the inode allocation map (to hold iag) */ #define SIZE_OF_MAP_PAGE PSIZE /* * fixed byte offset address */ #define SUPER1_OFF 0x8000 /* primary superblock */ #define AIMAP_OFF (SUPER1_OFF + SIZE_OF_SUPER) /* * Control page of aggregate inode map * followed by 1st extent of map */ #define AITBL_OFF (AIMAP_OFF + (SIZE_OF_MAP_PAGE << 1)) /* * 1st extent of aggregate inode table */ #define SUPER2_OFF (AITBL_OFF + INODE_EXTENT_SIZE) /* * secondary superblock */ #define BMAP_OFF (SUPER2_OFF + SIZE_OF_SUPER) /* * block allocation map */ /* * The following macro is used to indicate the number of reserved disk blocks at * the front of an aggregate, in terms of physical blocks. This value is * currently defined to be 32K. This turns out to be the same as the primary * superblock's address, since it directly follows the reserved blocks. */ #define AGGR_RSVD_BLOCKS SUPER1_B /* * The following macro is used to indicate the number of reserved bytes at the * front of an aggregate. This value is currently defined to be 32K. This * turns out to be the same as the primary superblock's byte offset, since it * directly follows the reserved blocks. */ #define AGGR_RSVD_BYTES SUPER1_OFF /* * The following macro defines the byte offset for the first inode extent in * the aggregate inode table. This allows us to find the self inode to find the * rest of the table. Currently this value is 44K. 
*/ #define AGGR_INODE_TABLE_START AITBL_OFF /* * fixed reserved inode number */ /* aggregate inode */ #define AGGR_RESERVED_I 0 /* aggregate inode (reserved) */ #define AGGREGATE_I 1 /* aggregate inode map inode */ #define BMAP_I 2 /* aggregate block allocation map inode */ #define LOG_I 3 /* aggregate inline log inode */ #define BADBLOCK_I 4 /* aggregate bad block inode */ #define FILESYSTEM_I 16 /* 1st/only fileset inode in ait: * fileset inode map inode */ /* per fileset inode */ #define FILESET_RSVD_I 0 /* fileset inode (reserved) */ #define FILESET_EXT_I 1 /* fileset inode extension */ #define ROOT_I 2 /* fileset root inode */ #define ACL_I 3 /* fileset ACL inode */ #define FILESET_OBJECT_I 4 /* the first fileset inode available for a file * or directory or link... */ #define FIRST_FILESET_INO 16 /* the first aggregate inode which describes * an inode. (To fsck this is also the first * inode in part 2 of the agg inode table.) */ /* * directory configuration */ #define JFS_NAME_MAX 255 #define JFS_PATH_MAX BPSIZE /* * file system state (superblock state) */ #define FM_CLEAN 0x00000000 /* file system is unmounted and clean */ #define FM_MOUNT 0x00000001 /* file system is mounted cleanly */ #define FM_DIRTY 0x00000002 /* file system was not unmounted and clean * when mounted or * commit failure occurred while being mounted: * fsck() must be run to repair */ #define FM_LOGREDO 0x00000004 /* log based recovery (logredo()) failed: * fsck() must be run to repair */ #define FM_EXTENDFS 0x00000008 /* file system extendfs() in progress */ #define FM_STATE_MAX 0x0000000f /* max value of s_state */ #endif /* _H_JFS_FILSYS */ ] |
| 370/6 | 2026/03/16 16:40 | llm | codeexpert |
0mModel:gemini-3-flash-preview Tokens: input: 21847 output: 41 thoughts: 855 Thoughts: **Analyzing `jfs_metapage.c` for Potential NULL Pointer Dereferences** Okay, so I'm diving into this `jfs_metapage.c` code, specifically looking for instances where `folio->mapping` is dereferenced without a preceding null check. The goal is to identify potential vulnerabilities. The starting point is to understand the page size setup. `PSIZE` is 4096 and `L2PSIZE` is 12, so the base `PAGE_SIZE` is, as expected, 4096 bytes on many architectures like x86, implying `MPS_PER_PAGE` is 1. However, if `PAGE_SIZE` is larger (e.g., 16K or 64K), `MPS_PER_PAGE` becomes greater than 1, meaning the code under `#if MPS_PER_PAGE > 1` handles large page architectures. My initial scan identified a few suspicious locations: `insert_metapage` (line 104), `remove_metapage` (line 118), `last_write_complete` (line 330), `metapage_write_folio` (line 355), `metapage_read_folio` (line 466), and `metapage_write_one` (line 625). Let's zoom in on `metapage_write_one`: it *looks* okay, because while it dereferences `folio->mapping` when assigning it to the local variable `mapping`, it subsequently uses `mapping` in `filemap_check_errors(mapping)`. But if `mapping` *is* NULL (because `folio->mapping` is NULL), then the call `filemap_check_errors(NULL)` will crash the program. So that's one definite potential problem. Next, I glanced at `__get_metapage`. While it does use `mapping`, that's a local variable set from `inode->i_mapping`, so it's not a direct dereference of `folio->mapping`. Now, reviewing `metapage_write_folio`, I see it uses `folio->mapping` to access the `host` member: `struct inode *inode = folio->mapping->host;` This is a direct dereference. That's another potential issue. Okay, time for a more methodical scan, rechecking every instance of `->mapping`. Now, I'm doing a more careful search, and here's what I've found: * Line 104: `folio->mapping->host->i_blkbits` - Potential dereference. 
* Line 118: `folio->mapping->host->i_blkbits` - Potential dereference. * Line 330: `mapping_set_error(folio->mapping, err)` - Potential dereference. * Line 355: `folio->mapping->host` - Potential dereference. * Line 466: `folio->mapping->host` - Potential dereference. * Line 625: `folio->mapping` (assigned to `mapping`) - Potential dereference. However, in `metapage_migrate_folio`, while there is the use of `mapping` it's not `folio->mapping`. A crucial check, is if the macro `folio_mapping(folio)` expands to the use of `folio->mapping`. Now, I'll search for `folio_mapping`. |
| 371/6 | 2026/03/16 16:40 | tool | codesearch-find-references |
Args:map[Name:folio_mapping SourceTreePrefix:fs/jfs/jfs_metapage.c] Results: map[References:<nil> TruncatedOutput:false] |
| 372/6 | 2026/03/16 16:40 | llm | codeexpert |
0mModel:gemini-3-flash-preview Tokens: input: 23535 output: 39 thoughts: 4076 Thoughts: **Analysis of `folio->mapping` Usage in `fs/jfs/jfs_metapage.c`** Alright, let's break this down. My primary concern is pinpointing where `folio->mapping` is dereferenced without a preceding null check, leading to potential crashes. I've thoroughly combed through the code, focusing on the instances where `folio->mapping` is accessed directly. My initial thought was to use `page->mapping`, but the code uses `folio`. I searched for `->mapping` throughout `jfs_metapage.c`. I've identified several locations where a crash is possible if `folio->mapping` is NULL. Primarily, these are: 1. **`insert_metapage`**: Specifically, within the `MPS_PER_PAGE > 1` block of code, the `folio->mapping->host->i_blkbits` is accessed without a NULL check on `folio->mapping` when `mp` is NOT NULL. 2. **`remove_metapage`**: Similar to `insert_metapage`, this occurs only if `MPS_PER_PAGE > 1`. 3. **`last_write_complete`**: `folio->mapping` is used in a call to `mapping_set_error()`. 4. **`metapage_write_folio`**: The inode is retrieved via `folio->mapping->host`. 5. **`metapage_read_folio`**: Similar to `metapage_write_folio`, the inode is retrieved via `folio->mapping->host`. 6. **`metapage_write_one`**: `folio->mapping` is assigned to a local variable `mapping` and then used in a call to `filemap_check_errors()`. I also checked other functions that use `folio` and `mapping`, such as: * `__metapage_migrate_folio` * `metapage_writepages` * `metapage_read_folio` * `force_metapage` * `release_metapage` * `__get_metapage` The code calls `insert_metapage` from `__get_metapage` and `metapage_read_folio`, and `metapage_read_folio` calls `insert_metapage` with a NULL `mp` which avoids the dereference of `folio->mapping`. 
Now, considering the context of how these folios are obtained (often from the page cache, using functions like `filemap_grab_folio`, or via `read_mapping_folio`), the scenarios where `folio->mapping` *should* be NULL are rare. Based on the code analysis, a direct dereference of `folio->mapping` is possible without a NULL check in several locations, especially if folios are somehow created or used in a manner inconsistent with the expected file system operation. I'd need further context to decide if it is a real vulnerability. Based on my experience I would suspect that one of these places, maybe `insert_metapage` or `remove_metapage`, has a bug. |
| 373/6 | 2026/03/16 16:40 | tool | codesearch-find-references |
Error:requested entity does not exist Args: map[Name:mapping SourceTreePrefix:fs/jfs/jfs_metapage.c] Results: map[References:<nil> TruncatedOutput:false] |
| 374/6 | 2026/03/16 16:40 | llm | codeexpert |
Model:gemini-3-flash-preview Tokens: input: 28366 output: 645 thoughts: 62912 |
| 375/4 | 2026/03/16 16:40 | llm | patch-generator |
0mModel:gemini-3.1-pro-preview Tokens: input: 76839 output: 693 thoughts: 7 Thoughts: Okay, I'm ready. I understand the task is to summarize the following text as though the thoughts described are my own, speaking in the first person, and crafting a detailed and fulsome summary. I'll approach this as an expert in my field would, analyzing the implicit meaning and context within the provided phrase. Here we go. **My Analysis: The Implication of "I'm Done"** This is fascinating. The phrase "Okay, I'm done" isn't just a simple statement of completion. As an expert, I immediately start dissecting the nuances. What *exactly* is done? What was the prior activity? Without context, the meaning is incredibly broad, which makes it all the more intriguing to explore potential implications. My initial interpretation: * **Completion of a Task:** The most obvious interpretation is the cessation of work on a specific project, be it research, a manuscript, a presentation, or even something seemingly mundane. The "done" signifies a satisfying endpoint, or perhaps a pragmatic decision to end. It's finality, closure. * **Mental Exhaustion or Decision:** It could also suggest a decision to stop, perhaps due to reaching an impasse, encountering intractable problems, or simply mental fatigue. There's an undertone of having exhausted resources, be they intellectual, physical, or temporal. * **Submission or Acceptance:** In a less literal sense, "done" could represent a form of submission or acceptance of a situation. Maybe I've reached a conclusion, made a decision, or decided that I can no longer make a difference in something. "I'm done" becomes a concise statement of acceptance, resignation, or the recognition of an achieved purpose. * **A Pivot:** The conclusion can also signify the intention to shift focus to a different endeavor or phase. It is an end, but also a beginning. 
It could be an acknowledgment of completion in this phase, and therefore, an open field in which to continue the work in another direction. The beauty of this simple phrase is its ambiguity. It forces me, as an expert, to consider a variety of possible underlying contexts, each of which colors the meaning and offers insight into the "thought process" behind its utterance. It’s a challenge to create a fulsome and exhaustive summary given the limitations. |
| 376/3 | 2026/03/16 16:40 | action | test-patch |
Results:map[PatchDiff:diff --git a/fs/jfs/jfs_metapage.c b/fs/jfs/jfs_metapage.c
index 64c6eaa7f..16ed69d10 100644
--- a/fs/jfs/jfs_metapage.c
+++ b/fs/jfs/jfs_metapage.c
@@ -106,7 +106,7 @@ static inline int insert_metapage(struct folio *folio, struct metapage *mp)
 	}
 
 	if (mp) {
-		l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
+		l2mp_blocks = L2PSIZE - mp->sb->s_blocksize_bits;
 		index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1);
 		a->mp_count++;
 		a->mp[index] = mp;
@@ -118,7 +118,7 @@ static inline int insert_metapage(struct folio *folio, struct metapage *mp)
 static inline void remove_metapage(struct folio *folio, struct metapage *mp)
 {
 	struct meta_anchor *a = folio->private;
-	int l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
+	int l2mp_blocks = L2PSIZE - mp->sb->s_blocksize_bits;
 	int index;
 
 	index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1);
@@ -392,7 +392,8 @@ static void last_write_complete(struct folio *folio, blk_status_t status)
 	if (status) {
 		int err = blk_status_to_errno(status);
 		printk(KERN_ERR "metapage_write_end_io: I/O error\n");
-		mapping_set_error(folio->mapping, err);
+		if (folio->mapping)
+			mapping_set_error(folio->mapping, err);
 	}
 
 	for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
@@ -836,9 +837,14 @@ void force_metapage(struct metapage *mp)
 	clear_bit(META_sync, &mp->flag);
 	folio_get(folio);
 	folio_lock(folio);
-	folio_mark_dirty(folio);
-	if (metapage_write_one(folio))
-		jfs_error(mp->sb, "metapage_write_one() failed\n");
+	if (folio->mapping) {
+		folio_mark_dirty(folio);
+		if (metapage_write_one(folio))
+			jfs_error(mp->sb, "metapage_write_one() failed\n");
+	} else {
+		clear_bit(META_dirty, &mp->flag);
+		folio_unlock(folio);
+	}
 	clear_bit(META_forcewrite, &mp->flag);
 	folio_put(folio);
 }
@@ -877,16 +883,25 @@ void release_metapage(struct metapage * mp)
 		return;
 	}
 
-	if (test_bit(META_dirty, &mp->flag)) {
-		folio_mark_dirty(folio);
-		if (test_bit(META_sync, &mp->flag)) {
-			clear_bit(META_sync, &mp->flag);
-			if (metapage_write_one(folio))
-				jfs_error(mp->sb, "metapage_write_one() failed\n");
-			folio_lock(folio);
-		}
-	} else if (mp->lsn)	/* discard_metapage doesn't remove it */
-		remove_from_logsync(mp);
+	if (folio->mapping) {
+		if (test_bit(META_dirty, &mp->flag)) {
+			folio_mark_dirty(folio);
+			if (test_bit(META_sync, &mp->flag)) {
+				clear_bit(META_sync, &mp->flag);
+				if (metapage_write_one(folio))
+					jfs_error(
+						mp->sb,
+						"metapage_write_one() failed\n");
+				folio_lock(folio);
+			}
+		} else if (mp->lsn)	/* discard_metapage doesn't remove it */
+			remove_from_logsync(mp);
+	} else {
+		clear_bit(META_dirty, &mp->flag);
+		clear_bit(META_sync, &mp->flag);
+		if (mp->lsn)
+			remove_from_logsync(mp);
+	}
 
 	/* Try to keep metapages from using up too much memory */
 	drop_metapage(folio, mp);
 TestError:] |
| 377/1 | 2026/03/16 16:50 | action | get-maintainers |
Results:map[Recipients:[map[Email:jfs-discussion@lists.sourceforge.net Name: To:true] map[Email:linux-kernel@vger.kernel.org Name: To:false] map[Email:shaggy@kernel.org Name:Dave Kleikamp To:true]]] |
| 378/1 | 2026/03/16 16:50 | action | get-recent-commits |
Results:map[RecentCommits:treewide: Replace kmalloc with kmalloc_obj for non-scalar types treewide: remove MIGRATEPAGE_SUCCESS jfs: stop using write_cache_pages jfs: implement migrate_folio for jfs_metapage_aops jfs: Remove use of folio error flag jfs: Change metapage->page to metapage->folio jfs: Convert force_metapage to use a folio jfs: Convert inc_io to take a folio jfs: Convert page_to_mp to folio_to_mp jfs; Convert __invalidate_metapages to use a folio jfs: Convert dec_io to take a folio jfs: Convert drop_metapage and remove_metapage to take a folio jfs; Convert release_metapage to use a folio jfs: Convert insert_metapage() to take a folio jfs: Convert __get_metapage to use a folio jfs: Convert metapage_writepage to metapage_write_folio jfs: Convert metapage_read_folio to use folio APIs mm,jfs: move write_one_page/folio_write_one to jfs jfs: Remove check for PageUptodate jfs: Convert to release_folio ] |
| 379/1 | 2026/03/16 16:50 | agent | description-generator |
Model:gemini-3.1-pro-preview Instruction: You are an experienced Linux kernel developer tasked with writing a commit description for a kernel bug fixing commit. The description should start with a one-line summary, and then include description of the bug being fixed, and how it's fixed by the provided patch. Your final reply should contain only the text of the commit description. Phrase the one-line summary so that it is not longer than 72 characters. The rest of the description must be word-wrapped at 72 characters. Prefer calling several tools at the same time to save round-trips. Prompt:
The crash that corresponds to the bug is:
Oops: general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] SMP KASAN NOPTI
KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
CPU: 1 UID: 0 PID: 121 Comm: jfsCommit Not tainted syzkaller #1 PREEMPT_{RT,(full)}
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.17.0-debian-1.17.0-1 04/01/2014
RIP: 0010:metapage_write_folio+0xa8/0xca0 fs/jfs/jfs_metapage.c:428
Code: e8 cd 34 78 fe 4d 8d 74 24 18 4c 89 f0 48 c1 e8 03 42 80 3c 38 00 74 08 4c 89 f7 e8 82 91 dc fe 4d 8b 36 4c 89 f0 48 c1 e8 03 <42> 80 3c 38 00 74 08 4c 89 f7 e8 69 91 dc fe 49 8b 1e 4d 8d 74 24
RSP: 0018:ffffc90002197840 EFLAGS: 00010256
RAX: 0000000000000000 RBX: 1ffffd400023200e RCX: ffff88801e04bd80
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffffc90002197950 R08: ffffea0001190077 R09: 1ffffd400023200e
R10: dffffc0000000000 R11: fffff9400023200f R12: ffffea0001190040
R13: 0000000000000081 R14: 0000000000000000 R15: dffffc0000000000
FS: 0000000000000000(0000) GS:ffff8880ecaf3000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f1b44e236e0 CR3: 000000000d9be000 CR4: 0000000000750ef0
PKRU: 55555554
Call Trace:
<TASK>
metapage_write_one+0x24c/0x450 fs/jfs/jfs_metapage.c:818
release_metapage+0x5b9/0xab0 fs/jfs/jfs_metapage.c:884
txUnlock+0x509/0xdc0 fs/jfs/jfs_txnmgr.c:948
txLazyCommit fs/jfs/jfs_txnmgr.c:2683 [inline]
jfs_lazycommit+0x52e/0xa10 fs/jfs/jfs_txnmgr.c:2734
kthread+0x389/0x480 kernel/kthread.c:436
ret_from_fork+0x513/0xba0 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
</TASK>
Modules linked in:
---[ end trace 0000000000000000 ]---
RIP: 0010:metapage_write_folio+0xa8/0xca0 fs/jfs/jfs_metapage.c:428
Code: e8 cd 34 78 fe 4d 8d 74 24 18 4c 89 f0 48 c1 e8 03 42 80 3c 38 00 74 08 4c 89 f7 e8 82 91 dc fe 4d 8b 36 4c 89 f0 48 c1 e8 03 <42> 80 3c 38 00 74 08 4c 89 f7 e8 69 91 dc fe 49 8b 1e 4d 8d 74 24
RSP: 0018:ffffc90002197840 EFLAGS: 00010256
RAX: 0000000000000000 RBX: 1ffffd400023200e RCX: ffff88801e04bd80
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffffc90002197950 R08: ffffea0001190077 R09: 1ffffd400023200e
R10: dffffc0000000000 R11: fffff9400023200f R12: ffffea0001190040
R13: 0000000000000081 R14: 0000000000000000 R15: dffffc0000000000
FS: 0000000000000000(0000) GS:ffff8880ecaf3000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f1b44e236e0 CR3: 000000000d9be000 CR4: 0000000000750ef0
PKRU: 55555554
----------------
Code disassembly (best guess):
0: e8 cd 34 78 fe call 0xfe7834d2
5: 4d 8d 74 24 18 lea 0x18(%r12),%r14
a: 4c 89 f0 mov %r14,%rax
d: 48 c1 e8 03 shr $0x3,%rax
11: 42 80 3c 38 00 cmpb $0x0,(%rax,%r15,1)
16: 74 08 je 0x20
18: 4c 89 f7 mov %r14,%rdi
1b: e8 82 91 dc fe call 0xfedc91a2
20: 4d 8b 36 mov (%r14),%r14
23: 4c 89 f0 mov %r14,%rax
26: 48 c1 e8 03 shr $0x3,%rax
* 2a: 42 80 3c 38 00 cmpb $0x0,(%rax,%r15,1) <-- trapping instruction
2f: 74 08 je 0x39
31: 4c 89 f7 mov %r14,%rdi
34: e8 69 91 dc fe call 0xfedc91a2
39: 49 8b 1e mov (%r14),%rbx
3c: 4d rex.WRB
3d: 8d .byte 0x8d
3e: 74 24 je 0x64
The explanation of the root cause of the bug is:
The crash is a NULL pointer dereference in `metapage_write_folio()` caused by `folio->mapping` being `NULL`.
### Root Cause Analysis
In JFS, directory index tables are stored in the directory inode's mapping (`ip->i_mapping`) and accessed via metapages.
When a directory is deleted, the following sequence of events occurs:
1. `jfs_rmdir()` removes the directory entry from its parent. If the directory becomes empty, `dtInitRoot()` is called.
2. `dtInitRoot()` truncates the directory's index table by calling `xtTruncate(tid, ip, 0, COMMIT_PWMAP)` and sets the `COMMIT_Stale` flag on the child directory.
3. The caller (e.g., `jfs_rmdir` or `jfs_unlink`) checks the `COMMIT_Stale` flag and calls `jfs_truncate_nolock(ip, 0)`.
4. `jfs_truncate_nolock()` calls `truncate_setsize(ip, 0)`, which removes all folios of the directory's index table from the page cache. As part of this removal, `folio->mapping` is set to `NULL`.
However, the dirty metapages of the index table (modified during the deletion process) are still in the `tlock` list of the transaction.
When the transaction is eventually committed, `txUnlock()` is called (often asynchronously by the `jfs_lazycommit` kthread).
1. `txUnlock()` calls `put_metapage(mp)`, which in turn calls `release_metapage(mp)`.
2. `release_metapage()` sees that `test_bit(META_dirty, &mp->flag)` is true, and attempts to write the page back by calling `metapage_write_one(folio)`.
3. `metapage_write_one()` calls `metapage_write_folio(folio, &wbc)`.
4. `metapage_write_folio()` executes `struct inode *inode = folio->mapping->host;`. Since `folio->mapping` is `NULL`, this results in a general protection fault / NULL pointer dereference.
### How to Fix
To fix this, `release_metapage()` and `force_metapage()` must check if `folio->mapping` is `NULL` before attempting to write the metapage. If it is `NULL`, it means the folio has been truncated and removed from the page cache. The metapage should not be written back. Instead, we should clear the `META_dirty` and `META_sync` flags, and remove the metapage from the logsync list if necessary.
Clearing the `META_dirty` flag is crucial because `drop_metapage()` checks this flag; if it remains set, `drop_metapage()` will return early, causing a permanent memory leak of the metapage structure.
**1. Fix in `release_metapage` (`fs/jfs/jfs_metapage.c:880`):**
```c
if (folio->mapping) {
if (test_bit(META_dirty, &mp->flag)) {
folio_mark_dirty(folio);
if (test_bit(META_sync, &mp->flag)) {
clear_bit(META_sync, &mp->flag);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
folio_lock(folio);
}
} else if (mp->lsn) /* discard_metapage doesn't remove it */
remove_from_logsync(mp);
} else {
clear_bit(META_dirty, &mp->flag);
clear_bit(META_sync, &mp->flag);
if (mp->lsn)
remove_from_logsync(mp);
}
```
**2. Fix in `force_metapage` (`fs/jfs/jfs_metapage.c:839`):**
```c
folio_lock(folio);
if (folio->mapping) {
folio_mark_dirty(folio);
if (metapage_write_one(folio))
jfs_error(mp->sb, "metapage_write_one() failed\n");
} else {
clear_bit(META_dirty, &mp->flag);
folio_unlock(folio);
}
clear_bit(META_forcewrite, &mp->flag);
```
**Additional Note for architectures with `PAGE_SIZE > 4096`:**
On architectures where `PAGE_SIZE > 4096` (e.g., 64K pages on ARM64/PPC), `insert_metapage()` and `remove_metapage()` also dereference `folio->mapping->host->i_blkbits`. To prevent similar crashes there when `folio->mapping` is `NULL`, they should be changed to use `mp->sb->s_blocksize_bits` instead, which is always safely available and identical to `i_blkbits` in JFS.
The diff of the bug fix is:
diff --git a/fs/jfs/jfs_metapage.c b/fs/jfs/jfs_metapage.c
index 64c6eaa7f..16ed69d10 100644
--- a/fs/jfs/jfs_metapage.c
+++ b/fs/jfs/jfs_metapage.c
@@ -106,7 +106,7 @@ static inline int insert_metapage(struct folio *folio, struct metapage *mp)
}
if (mp) {
- l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
+ l2mp_blocks = L2PSIZE - mp->sb->s_blocksize_bits;
index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1);
a->mp_count++;
a->mp[index] = mp;
@@ -118,7 +118,7 @@ static inline int insert_metapage(struct folio *folio, struct metapage *mp)
static inline void remove_metapage(struct folio *folio, struct metapage *mp)
{
struct meta_anchor *a = folio->private;
- int l2mp_blocks = L2PSIZE - folio->mapping->host->i_blkbits;
+ int l2mp_blocks = L2PSIZE - mp->sb->s_blocksize_bits;
int index;
index = (mp->index >> l2mp_blocks) & (MPS_PER_PAGE - 1);
@@ -392,7 +392,8 @@ static void last_write_complete(struct folio *folio, blk_status_t status)
if (status) {
int err = blk_status_to_errno(status);
printk(KERN_ERR "metapage_write_end_io: I/O error\n");
- mapping_set_error(folio->mapping, err);
+ if (folio->mapping)
+ mapping_set_error(folio->mapping, err);
}
for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
@@ -836,9 +837,14 @@ void force_metapage(struct metapage *mp)
clear_bit(META_sync, &mp->flag);
folio_get(folio);
folio_lock(folio);
- folio_mark_dirty(folio);
- if (metapage_write_one(folio))
- jfs_error(mp->sb, "metapage_write_one() failed\n");
+ if (folio->mapping) {
+ folio_mark_dirty(folio);
+ if (metapage_write_one(folio))
+ jfs_error(mp->sb, "metapage_write_one() failed\n");
+ } else {
+ clear_bit(META_dirty, &mp->flag);
+ folio_unlock(folio);
+ }
clear_bit(META_forcewrite, &mp->flag);
folio_put(folio);
}
@@ -877,16 +883,25 @@ void release_metapage(struct metapage * mp)
return;
}
- if (test_bit(META_dirty, &mp->flag)) {
- folio_mark_dirty(folio);
- if (test_bit(META_sync, &mp->flag)) {
- clear_bit(META_sync, &mp->flag);
- if (metapage_write_one(folio))
- jfs_error(mp->sb, "metapage_write_one() failed\n");
- folio_lock(folio);
- }
- } else if (mp->lsn) /* discard_metapage doesn't remove it */
- remove_from_logsync(mp);
+ if (folio->mapping) {
+ if (test_bit(META_dirty, &mp->flag)) {
+ folio_mark_dirty(folio);
+ if (test_bit(META_sync, &mp->flag)) {
+ clear_bit(META_sync, &mp->flag);
+ if (metapage_write_one(folio))
+ jfs_error(
+ mp->sb,
+ "metapage_write_one() failed\n");
+ folio_lock(folio);
+ }
+ } else if (mp->lsn) /* discard_metapage doesn't remove it */
+ remove_from_logsync(mp);
+ } else {
+ clear_bit(META_dirty, &mp->flag);
+ clear_bit(META_sync, &mp->flag);
+ if (mp->lsn)
+ remove_from_logsync(mp);
+ }
/* Try to keep metapages from using up too much memory */
drop_metapage(folio, mp);
Additional description of the patch:
I have analyzed the root cause and implemented a comprehensive fix to address the NULL pointer dereference in `metapage_write_folio()` and prevent similar issues across `fs/jfs/jfs_metapage.c`.
### What I did in the patch and why:
1. **Fixed `release_metapage()` and `force_metapage()`:**
As identified in the root cause analysis, when a directory is deleted, its index table is truncated, which removes folios from the page cache and sets `folio->mapping` to `NULL`. However, dirty metapages can still be lingering in the transaction's `tlock` list. When `txUnlock()` eventually calls `release_metapage()`, it attempts to write the page back via `metapage_write_one()`, leading to a crash.
I added a check for `folio->mapping` in both `release_metapage()` and `force_metapage()`. If the mapping is `NULL`, it means the folio has been truncated. Instead of writing it back, we now clear the `META_dirty` and `META_sync` flags, and remove the metapage from the logsync list if necessary. Clearing `META_dirty` is crucial because `drop_metapage()` checks this flag; if it remains set, `drop_metapage()` returns early, causing a permanent memory leak of the metapage structure.
2. **Fixed `insert_metapage()` and `remove_metapage()` for architectures with `PAGE_SIZE > 4096`:**
On architectures with larger page sizes (e.g., 64K pages on ARM64/PPC), `MPS_PER_PAGE > 1`. In this configuration, `insert_metapage()` and `remove_metapage()` were dereferencing `folio->mapping->host->i_blkbits` to calculate the log2 blocks per metapage. If these functions are called on a truncated folio (where `folio->mapping` is `NULL`), this would result in another NULL pointer dereference.
I changed these functions to use `mp->sb->s_blocksize_bits` instead. The metapage's superblock pointer (`mp->sb`) is always safely available and its blocksize bits are identical to the inode's `i_blkbits` in JFS, making this a safe and robust alternative.
3. **Fixed `last_write_complete()`:**
During my review of other places where `folio->mapping` is dereferenced, I found that `last_write_complete()` (the bio `end_io` callback for metapage writes) calls `mapping_set_error(folio->mapping, err)` if an I/O error occurs. If a folio is truncated while it is under writeback, `folio->mapping` becomes `NULL`. When the writeback completes with an error, this would cause a crash.
I added a simple `if (folio->mapping)` check before calling `mapping_set_error()` to prevent this.
These changes collectively ensure that JFS gracefully handles metapages associated with truncated folios across all architectures and execution paths, preventing both kernel crashes and memory leaks.
Here are summaries of recent commits that touched the same files.
Format the summary line consistently with these, look how prefixes
are specified, letter capitalization, style, etc.
treewide: Replace kmalloc with kmalloc_obj for non-scalar types
treewide: remove MIGRATEPAGE_SUCCESS
jfs: stop using write_cache_pages
jfs: implement migrate_folio for jfs_metapage_aops
jfs: Remove use of folio error flag
jfs: Change metapage->page to metapage->folio
jfs: Convert force_metapage to use a folio
jfs: Convert inc_io to take a folio
jfs: Convert page_to_mp to folio_to_mp
jfs; Convert __invalidate_metapages to use a folio
jfs: Convert dec_io to take a folio
jfs: Convert drop_metapage and remove_metapage to take a folio
jfs; Convert release_metapage to use a folio
jfs: Convert insert_metapage() to take a folio
jfs: Convert __get_metapage to use a folio
jfs: Convert metapage_writepage to metapage_write_folio
jfs: Convert metapage_read_folio to use folio APIs
mm,jfs: move write_one_page/folio_write_one to jfs
jfs: Remove check for PageUptodate
jfs: Convert to release_folio
Reply: jfs: fix NULL pointer dereference in metapage_write_folio When a directory is deleted, its index table is truncated, which removes folios from the page cache and sets folio->mapping to NULL. However, dirty metapages can still be lingering in the transaction's tlock list. When txUnlock() eventually calls release_metapage(), it attempts to write the page back via metapage_write_one(), leading to a NULL pointer dereference in metapage_write_folio(). Fix this by checking if folio->mapping is NULL in release_metapage() and force_metapage(). If it is NULL, it means the folio has been truncated. Instead of writing it back, clear the META_dirty and META_sync flags, and remove the metapage from the logsync list if necessary. Clearing META_dirty is crucial because drop_metapage() checks this flag; if it remains set, drop_metapage() returns early, causing a memory leak. Additionally, fix similar potential NULL pointer dereferences in: - insert_metapage() and remove_metapage(): On architectures with PAGE_SIZE > 4096, these functions dereference folio->mapping->host->i_blkbits. Change them to use mp->sb->s_blocksize_bits instead, which is always safely available. - last_write_complete(): Check folio->mapping before calling mapping_set_error() to prevent a crash if a folio is truncated while under writeback. |
| 380/2 | 2026/03/16 16:50 | llm | description-generator |
Model:gemini-3.1-pro-preview Tokens: input: 8582 output: 339 thoughts: 2080 Thoughts: **JFS Commit Description Generation** Okay, so I need to create a commit message for this JFS bug fix. The core of the problem is a NULL pointer dereference in `metapage_write_folio`. The root cause? A `folio->mapping` that's unexpectedly `NULL` because the directory was deleted and its index table truncated. This leaves dirty metapages dangling in the transaction. First, the one-line summary. I'll stick with the pattern of recent commits, "jfs: ...". Let's go with "jfs: fix NULL pointer dereference in metapage_write_folio". That's a good, concise description. Now for the detailed description. We have to clearly explain what the bug is and how it's fixed, all within the 72-character line limit. Here's my draft: When a directory is deleted, its index table is truncated, which removes folios from the page cache and sets folio->mapping to NULL. However, dirty metapages can still be lingering in the transaction's tlock list. When txUnlock() eventually calls release_metapage(), it attempts to write the page back via metapage_write_one(), leading to a NULL pointer dereference in metapage_write_folio(). Fix this by checking if folio->mapping is NULL in release_metapage() and force_metapage(). If it is NULL, it means the folio has been truncated. Instead of writing it back, clear the META_dirty and META_sync flags, and remove the metapage from the logsync list if necessary. Clearing META_dirty is crucial because drop_metapage() checks this flag; if it remains set, drop_metapage() returns early, causing a memory leak. Additionally, fix similar potential NULL pointer dereferences in: - insert_metapage() and remove_metapage(): On architectures with PAGE_SIZE > 4096, these functions dereference folio->mapping->host->i_blkbits. Change them to use mp->sb->s_blocksize_bits instead, which is always safely available. 
- last_write_complete(): Check folio->mapping before calling mapping_set_error() to prevent a crash if a folio is truncated while under writeback. Okay, let's make a few minor tweaks to improve readability and flow, and keep those lines under 72 chars. It seems good now. The line lengths are all within the requirement. Let's make sure it's accurate and complete. Here is the final description. jfs: fix NULL pointer dereference in metapage_write_folio When a directory is deleted, its index table is truncated, which removes folios from the page cache and sets folio->mapping to NULL. However, dirty metapages can still be lingering in the transaction's tlock list. When txUnlock() eventually calls release_metapage(), it attempts to write the page back via metapage_write_one(), leading to a NULL pointer dereference in metapage_write_folio(). Fix this by checking if folio->mapping is NULL in release_metapage() and force_metapage(). If it is NULL, it means the folio has been truncated. Instead of writing it back, clear the META_dirty and META_sync flags, and remove the metapage from the logsync list if necessary. Clearing META_dirty is crucial because drop_metapage() checks this flag; if it remains set, drop_metapage() returns early, causing a memory leak. Additionally, fix similar potential NULL pointer dereferences in: - insert_metapage() and remove_metapage(): On architectures with PAGE_SIZE > 4096, these functions dereference folio->mapping->host->i_blkbits. Change them to use mp->sb->s_blocksize_bits instead, which is always safely available. - last_write_complete(): Check folio->mapping before calling mapping_set_error() to prevent a crash if a folio is truncated while under writeback. |
| Total Calls | Total Tokens | Avg Tokens | Total Duration (Seconds) | Avg Duration (Seconds) |
|---|---|---|---|---|
| Total Calls | Total Duration (Seconds) | Avg Duration (Seconds) |
|---|---|---|