syzbot


ID Workflow Result Correct Bug Created Started Finished Revision Error
20509885-a74d-4d12-8044-973a06a45bb7 repro 💥 KCSAN: data-race in __se_sys_mremap / vms_complete_munmap_vmas 2026/03/10 13:47 2026/03/11 11:03 2026/03/11 11:16 86914af9098a80da53c3c5d46be980c572ee29f8 (RECITATION)
Agent: agent


Crash report:
==================================================================
BUG: KCSAN: data-race in __se_sys_mremap / vms_complete_munmap_vmas

read-write to 0xffff888120891c88 of 4 bytes by task 5502 on cpu 0:
 vms_complete_munmap_vmas+0x54/0x440 mm/vma.c:1312
 __mmap_complete+0x4a/0x230 mm/vma.c:2581
 __mmap_region mm/vma.c:2761 [inline]
 mmap_region+0x1833/0x1aa0 mm/vma.c:2830
 do_mmap+0x9b2/0xbd0 mm/mmap.c:558
 vm_mmap_pgoff+0x183/0x2d0 mm/util.c:581
 ksys_mmap_pgoff+0xc1/0x310 mm/mmap.c:604
 x64_sys_call+0x16bb/0x3000 arch/x86/include/generated/asm/syscalls_64.h:10
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xc0/0x2a0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

read to 0xffff888120891c88 of 4 bytes by task 5513 on cpu 1:
 check_mremap_params mm/mremap.c:1820 [inline]
 do_mremap mm/mremap.c:1924 [inline]
 __do_sys_mremap mm/mremap.c:1997 [inline]
 __se_sys_mremap+0x43d/0xb50 mm/mremap.c:1965
 __x64_sys_mremap+0x67/0x80 mm/mremap.c:1965
 x64_sys_call+0x2944/0x3000 arch/x86/include/generated/asm/syscalls_64.h:26
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xc0/0x2a0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

value changed: 0x00000033 -> 0x00000034

Reported by Kernel Concurrency Sanitizer on:
CPU: 1 UID: 0 PID: 5513 Comm: syz.2.540 Not tainted syzkaller #0 PREEMPT(voluntary) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
==================================================================

Trajectory:
Seq Timestamp Type Name Duration
0/0 2026/03/11 11:03 flow repro
13m Error:
 (RECITATION)

1/1 2026/03/11 11:03 action kernel-checkouter
0m Results:
map[KernelSrc:/usr/local/google/home/dvyukov/syzkaller/agent/workdir/cache/src/207affbefc69ac0df096ec0be6aa231a760b4e4f]

2/1 2026/03/11 11:04 action kernel-builder
4m Results:
map[KernelObj:/usr/local/google/home/dvyukov/syzkaller/agent/workdir/cache/build/9b1012f4a84cb3ec87879430e52fb33afeaba62f]

3/1 2026/03/11 11:08 action codesearch-prepare
4m Results:
map[Index:codesearch-index]

4/1 2026/03/11 11:12 agent crash-repro-finder
4m Model:
gemini-3-pro-preview

Error:
 (RECITATION)

Instruction:
You are an expert in the Linux kernel fuzzing. Your goal is to write a syzkaller program to trigger a specific bug.

Don't make assumptions about the kernel source code, use the provided codesearch tools
to examine the kernel code instead.

Document about syzkaller program syntax:
===
# Program syntax

Syzkaller uses a compact domain-specific language (DSL) for programs
to log executed programs, test its code, and persist programs in the
corpus. This page provides a brief description of the corresponding
syntax. Some useful information can also be found in the
[existing examples](/sys/linux/test) and in the program
[deserialization code](/prog/encoding.go).

Together with execution options, the DSL provides everything that
syz-executor needs to run a program.

For example, consider the program:
```
r0 = syz_open_dev$loop(&(0x7f00000011c0), 0x0, 0x0)
r1 = openat$6lowpan_control(0xffffffffffffff9c, &(0x7f00000000c0), 0x2, 0x0)
ioctl$LOOP_SET_FD(r0, 0x4c00, r1)
```

Each line in this program describes a particular syscall invocation,
with the first two calls saving the result in temporary variables `r0`
and `r1`, which are passed to the third call.

```
line = assignment | call
assignment = variable " = " call
call = syscall-name "(" [arg ["," arg]*] ")"  ["(" [call-prop ["," call-prop*] ")"]
arg = "nil" | "AUTO" | const-arg | resource-arg | result-arg | pointer-arg | string-arg | struct-arg | array-arg | union-arg
const-arg = integer
resource-arg = variable ["/" hex-integer] ["+" hex-integer]
result-arg = "<" variable "=>" arg
pointer-arg = "&" pointer-arg-addr ["=ANY"] "=" arg
pointer-arg-addr = "AUTO" | "(" pointer-addr ["/" region-size] ")"
string-arg = "'" escaped-string "'" | "\"" escaped-string "\"" | "\"$" escaped-string "\""
struct-arg =  "{" [arg ["," arg]*] "}"
array-arg = "[" [arg ["," arg]*] "]"
union-arg = "@" field-name ["=" arg]
call-prop = prop-name ": " prop-value
variable = "r" dec-integer
pointer-addr = integer
region-size = integer
integer = dec-integer | oct-integer | "0x" hex-integer
```

Programs may also contain blank lines and comments.
```
# Obtain a file handle
r0 = openat(0xffffffffffffff9c, &AUTO='./file1\x00', 0x42, 0x1ff)

# Perform a write operation
write(r0, &AUTO="01010101", 0x4)
```

### Memory management

Memory management is performed by syzkaller itself. It will allocate
virtual memory regions of the necessary size and set the final values
of pointer arguments.

By using the `AUTO` keyword, programs can give syzkaller the full
control over storing the data. This may be convenient e.g. when a
parameter must be passed by reference, but the exact location of its
value is not of particular importance.

```
r1 = syz_genetlink_get_family_id$nl80211(&AUTO='nl80211\x00', 0xffffffffffffffff)
ioctl$sock_SIOCGIFINDEX_80211(r0, 0x8933, &AUTO={'wlan0\x00', <r2=>0x0})
```

Alternatively, some data can be "anchored" to specific addresses. It
may be especially important when a memory region must be shared
between multiple calls.  In this case, pointer addresses must be given
at the 0x7f0000000000 offset. Before the actual execution, syzkaller
will adjust pointers to the start of the actual mmap'ed region.

### Call properties

Call properties specify extra information about how a specific call
must be executed. Each call within a program has its own set of call
properties. If no properties are provided, syzkaller takes the default
ones.

Currently, syzkaller supports the following call properties.

#### Fault injection
Syntax: `fail_nth: N`.

It takes an integer (base 10) argument `N`. If the argument is
non-negative, a fault will be injected into the `N`-th occasion.

```
r0 = openat$6lowpan_control(0xffffffffffffff9c, &(0x7f00000000c0), 0x2, 0x0)
ioctl$LOOP_SET_FD(r0, 0x4c00, r0) (fail_nth: 5)
```

#### Async
Syntax: `async`.

Instructs `syz-executor` not to wait until the call completes and
to proceed immediately to the next call.

```
r0 = openat(0xffffffffffffff9c, &AUTO='./file1\x00', 0x42, 0x1ff)
write(r0, &AUTO="01010101", 0x4) (async)
read(r0, &AUTO=""/4, 0x4)
close(r0)
```

When setting `async` flags be aware of the following considerations:
* Such programs should only be executed in threaded mode (i.e. `-threaded`
flag must be passed to `syz-executor`.
* Each `async` call is executed in a separate thread and there's a
limited number of available threads (`kMaxThreads = 16`).
* If an `async` call produces a resource, keep in mind that some other call
might take it as input and `syz-executor` will just pass 0 if the resource-
producing call has not finished by that time.

===

Document about syzlang system call descriptions syntax:
===
# Syscall description language

aka `syzlang` (`[siːzˈlæŋg]`)

Pseudo-formal grammar of syscall description:

```
syscallname "(" [arg ["," arg]*] ")" [type] ["(" attribute* ")"]
arg = argname type
argname = identifier
type = typename [ "[" type-options "]" ]
typename = "const" | "intN" | "intptr" | "flags" | "array" | "ptr" |
	   "string" | "filename" | "glob" | "len" |
	   "bytesize" | "bytesizeN" | "bitsize" | "vma" | "proc" |
	   "compressed_image"
type-options = [type-opt ["," type-opt]]
```

common type-options include:

```
"opt" - the argument is optional (like mmap fd argument, or accept peer argument)
```

rest of the type-options are type-specific:

```
"const": integer constant, type-options:
	value, underlying type (one of "intN", "intptr")
"intN"/"intptr": an integer without a particular meaning, type-options:
	either an optional range of values (e.g. "5:10", or "100:200")
	or a reference to flags description (see below),
	or a single value
	optionally followed by an alignment parameter if using a range
"flags": a set of values, type-options:
	reference to flags description (see below), underlying int type (e.g. "int32")
"array": a variable/fixed-length array, type-options:
	type of elements, optional size (fixed "5", or ranged "5:10", boundaries inclusive)
"ptr"/"ptr64": a pointer to an object, type-options:
	direction (in/out/inout); type of the object
	ptr64 has size of 8 bytes regardless of target pointer size
"string": a zero-terminated memory buffer (no pointer indirection implied), type-options:
	either a string value in quotes for constant strings (e.g. "foo" or `deadbeef` for hex literal),
	or a reference to string flags (special value `filename` produces file names),
	optionally followed by a buffer size (string values will be padded with \x00 to that size)
"stringnoz": a non-zero-terminated memory buffer (no pointer indirection implied), type-options:
	either a string value in quotes for constant strings (e.g. "foo" or `deadbeef` for hex literal),
	or a reference to string flags,
"glob": glob pattern to match on the target files, type-options:
	a pattern string in quotes (syntax: https://golang.org/pkg/path/filepath/#Match)
	(e.g. "/sys/" or "/sys/**/*"),
	or include exclude glob too (e.g. "/sys/**/*:-/sys/power/state")
"fmt": a string representation of an integer (not zero-terminated), type-options:
	format (one of "dec", "hex", "oct") and the value (a resource, int, flags or proc)
	the resulting data is always fixed-size (formatted as "%020llu", "0x%016llx" or "%023llo", respectively)
"len": length of another field (for array it is number of elements), type-options:
	argname of the object
"bytesize": similar to "len", but always denotes the size in bytes, type-options:
	argname of the object
"bitsize": similar to "len", but always denotes the size in bits, type-options:
	argname of the object
"offsetof": offset of the field from the beginning of the parent struct, type-options:
	field
"vma"/"vma64": a pointer to a set of pages (used as input for mmap/munmap/mremap/madvise), type-options:
	optional number of pages (e.g. vma[7]), or a range of pages (e.g. vma[2-4])
	vma64 has size of 8 bytes regardless of target pointer size
"proc": per process int (see description below), type-options:
	value range start, how many values per process, underlying type
"compressed_image": zlib-compressed disk image
	syscalls accepting compressed images must be marked with `no_generate`
	and `no_minimize` call attributes. if the content of the decompressed image
	can be checked by a `fsck`-like command, use the `fsck` syscall attribute
"text": machine code of the specified type, type-options:
	text type (x86_real, x86_16, x86_32, x86_64, arm64)
"void": type with static size 0
	mostly useful inside of templates and varlen unions, can't be syscall argument
```

flags/len/flags also have trailing underlying type type-option when used in structs/unions/pointers.

Flags are described as:

```
flagname = const ["," const]*
```

or for string flags as:

```
flagname = "\"" literal "\"" ["," "\"" literal "\""]*
```

Call attributes are:

```
"disabled": the call will not be used in fuzzing; useful to temporary disable some calls
	or prohibit particular argument combinations.
"timeout[N]": additional execution timeout (in ms) for the call on top of some default value.
"prog_timeout[N]": additional execution timeout (in ms) for the whole program if it contains this call;
	if a program contains several such calls, the max value is used.
"ignore_return": ignore return value of this syscall in fallback feedback; need to be used for calls
	that don't return fixed error codes but rather something else (e.g. the current time).
"breaks_returns": ignore return values of all subsequent calls in the program in fallback feedback (can't be trusted).
"no_generate": do not try to generate this syscall, i.e. use only seed descriptions to produce it.
"no_minimize": do not modify instances of this syscall when trying to minimize a crashing program.
"no_squash": do not attempt to pass squashed arguments to this syscall.
	Without that, the fuzzer will sometimes attempt to replace complex structures with arrays of bytes,
	possibly triggering interesting mutations, but also making programs hard to reason about.
"fsck": the content of the compressed buffer argument for this syscall is a file system and the
	string argument is a fsck-like command that will be called to verify the filesystem.
"remote_cover": wait longer to collect remote coverage for this call.
"kfuzz_test": the call is a kfuzztest target.
"snapshot": the call is enabled by default only in snapshot fuzzing mode, but can also be enabled in
    the non-snasphot mode when listed in "enable_syscalls" with its full name (as opposed to a wildcard match).
    It can also always be disabled via "disable_syscalls".
    The attribute is generally used to mark calls that are not safe to execute in non-snapshot mode
	(can lead to false positives, or lost connections to test machines.
```

## Ints

`int8`, `int16`, `int32` and `int64` denote an integer of the corresponding size.
`intptr` denotes a pointer-sized integer, i.e. C `long` type.

By appending `be` suffix (e.g. `int16be`) integers become big-endian.

It's possible to specify a range of values for an integer in the format of `int32[0:100]` or `int32[0:4096, 512]` for a 512-aligned int.

Integers can also take a reference to flags description or a value as its first type-option.
In that case, the alignment parameter is not supported.

To denote a bitfield of size N use `int64:N`.

It's possible to use these various kinds of ints as base types for `const`, `flags`, `len` and `proc`.

```
example_struct {
	f0	int8			# random 1-byte integer
	f1	const[0x42, int16be]	# const 2-byte integer with value 0x4200 (big-endian 0x42)
	f2	int32[0:100]		# random 4-byte integer with values from 0 to 100 inclusive
	f3	int32[1:10, 2]		# random 4-byte integer with values {1, 3, 5, 7, 9}
	f4	int64:20		# random 20-bit bitfield
	f5	int8[10]		# const 1-byte integer with value 10
	f6	int32[flagname]		# random 4-byte integer from the set of values referenced by flagname
}
```

## Structs

Structs are described as:

```
structname "{" "\n"
	(fieldname type ("(" fieldattribute* ")")? (if[expression])? "\n")+
"}" ("[" attribute* "]")?
```

Fields can have attributes specified in parentheses after the field, independent
of their type. `in/out/inout` attribute specify per-field direction, for example:

```
foo {
	field0	const[1, int32]	(in)
	field1	int32		(inout)
	field2	fd		(out)
}
```

You may specify conditions that determine whether a field will be included:

```
foo {
	field0	int32
	field1	int32 (if[value[field0] == 0x1])
}
```

See [the corresponding section](syscall_descriptions_syntax.md#conditional-fields)
for more details.

`out_overlay` attribute allows to have separate input and output layouts for the struct.
Fields before the `out_overlay` field are input, fields starting from `out_overlay` are output.
Input and output fields overlap in memory (both start from the beginning of the struct in memory).
For example:

```
foo {
	in0	const[1, int32]
	in1	flags[bar, int8]
	in2	ptr[in, string]
	out0	fd	(out_overlay)
	out1	int32
}
```

Structs can have attributes specified in square brackets after the struct.
Attributes are:

- `packed`: the struct does not have paddings between fields and has alignment 1; this is similar to GNU C `__attribute__((packed))`; struct alignment can be overridden with `align` attribute
- `align[N]`: the struct has alignment N and padded up to multiple of `N`; contents of the padding are unspecified (though, frequently are zeros); similar to GNU C `__attribute__((aligned(N)))`
- `size[N]`: the struct is padded up to the specified size `N`; contents of the padding are unspecified (though, frequently are zeros)

## Unions

Unions are described as:

```
unionname "[" "\n"
	(fieldname type (if[expression])? "\n")+
"]" ("[" attribute* "]")?
```

During fuzzing, syzkaller randomly picks one of the union options.

You may also specify conditions that determine whether the corresponding
option may or may not be selected, depending on values of other fields. See
[the corresponding section](syscall_descriptions_syntax.md#conditional-fields)
for more details.

Unions can have attributes specified in square brackets after the union.
Attributes are:

- `varlen`: union size is the size of the particular chosen option (not statically known); without this attribute unions are statically sized as maximum of all options (similar to C unions)
- `size[N]`: the union is padded up to the specified size `N`; contents of the padding are unspecified (though, frequently are zeros)

## Resources

Resources represent values that need to be passed from output of one syscall to input of another syscall. For example, `close` syscall requires an input value (fd) previously returned by `open` or `pipe` syscall. To achieve this, `fd` is declared as a resource. This is a way of modelling dependencies between syscalls, as defining a syscall as the producer of a resource and another syscall as the consumer defines a loose sense of ordering between them. Resources are described as:

```
"resource" identifier "[" underlying_type "]" [ ":" const ("," const)* ]
```

`underlying_type` is either one of `int8`, `int16`, `int32`, `int64`, `intptr` or another resource (which models inheritance, for example, a socket is a subtype of fd). The optional set of constants represent resource special values, for example, `0xffffffffffffffff` (-1) for "no fd", or `AT_FDCWD` for "the current dir". Special values are used once in a while as resource values. If no special values specified, special value of `0` is used. Resources can then be used as types, for example:

```
resource fd[int32]: 0xffffffffffffffff, AT_FDCWD, 1000000
resource sock[fd]
resource sock_unix[sock]

socket(...) sock
accept(fd sock, ...) sock
listen(fd sock, backlog int32)
```

Resources don't have to be necessarily returned by a syscall. They can be used as any other data type. For example:

```
resource my_resource[int32]

request_producer(..., arg ptr[out, my_resource])
request_consumer(..., arg ptr[inout, test_struct])

test_struct {
	...
	attr	my_resource
}
```

For more complex producer/consumer scenarios, field attributes can be utilized.
For example:

```
resource my_resource_1[int32]
resource my_resource_2[int32]

request_produce1_consume2(..., arg ptr[inout, test_struct])

test_struct {
	...
	field0	my_resource_1	(out)
	field1	my_resource_2	(in)
}
```

Each resource type must be "produced" (used as an output) by at least one syscall
(outside of unions and optional pointers) and "consumed" (used as an input)
by at least one syscall.

## Type Aliases

Complex types that are often repeated can be given short type aliases using the
following syntax:

```
type identifier underlying_type
```

For example:

```
type signalno int32[0:65]
type net_port proc[20000, 4, int16be]
```

Then, type alias can be used instead of the underlying type in any contexts.
Underlying type needs to be described as if it's a struct field, that is,
with the base type if it's required. However, type alias can be used as syscall
arguments as well. Underlying types are currently restricted to integer types,
`ptr`, `ptr64`, `const`, `flags` and `proc` types.

There are some builtin type aliases:

```
type bool8	int8[0:1]
type bool16	int16[0:1]
type bool32	int32[0:1]
type bool64	int64[0:1]
type boolptr	intptr[0:1]

type fileoff[BASE] BASE

type filename string[filename]

type buffer[DIR] ptr[DIR, array[int8]]
```

## Type Templates

Type templates can be declared as follows:

```
type buffer[DIR] ptr[DIR, array[int8]]
type fileoff[BASE] BASE
type nlattr[TYPE, PAYLOAD] {
	nla_len		len[parent, int16]
	nla_type	const[TYPE, int16]
	payload		PAYLOAD
} [align_4]
```

and later used as follows:

```
syscall(a buffer[in], b fileoff[int64], c ptr[in, nlattr[FOO, int32]])
```

There is builtin type template `optional` defined as:

```
type optional[T] [
	val	T
	void	void
] [varlen]
```

## Length

You can specify length of a particular field in struct or a named argument by
using `len`, `bytesize` and `bitsize` types, for example:

```
write(fd fd, buf ptr[in, array[int8]], count len[buf])

sock_fprog {
	len	len[filter, int16]
	filter	ptr[in, array[sock_filter]]
}
```

If `len`'s argument is a pointer, then the length of the pointee argument is used.

To denote the length of a field in N-byte words use `bytesizeN`, possible values
for N are 1, 2, 4 and 8.

To denote the length of the parent struct, you can use `len[parent, int8]`.
To denote the length of the higher level parent when structs are embedded into
one another, you can specify the type name of the particular parent:

```
s1 {
    f0      len[s2]  # length of s2
}

s2 {
    f0      s1
    f1      array[int32]
    f2      len[parent, int32]
}
```

`len` argument can also be a path expression which allows more complex
addressing. Path expressions are similar to C field references, but also allow
referencing parent and sibling elements. A special reference `syscall` used
in the beginning of the path allows to refer directly to the syscall arguments.
For example:

```
s1 {
	a	ptr[in, s2]
	b	ptr[in, s3]
	c	array[int8]
}

s2 {
	d	array[int8]
}

s3 {
# This refers to the array c in the parent s1.
	e	len[s1:c, int32]
# This refers to the array d in the sibling s2.
	f	len[s1:a:d, int32]
# This refers to the array k in the child s4.
	g	len[i:j, int32]
# This refers to syscall argument l.
	h	len[syscall:l, int32]
	i	ptr[in, s4]
}

s4 {
	j	array[int8]
}

foo(k ptr[in, s1], l ptr[in, array[int8]])
```

## Proc

The `proc` type can be used to denote per process integers.
The idea is to have a separate range of values for each executor, so they don't interfere.

The simplest example is a port number.
The `proc[20000, 4, int16be]` type means that we want to generate an `int16be`
integer starting from `20000` and assign `4` values for each process.
As a result the executor number `n` will get values in the `[20000 + n * 4, 20000 + (n + 1) * 4)` range.

## Integer Constants

Integer constants can be specified as decimal literals, as `0x`-prefixed
hex literals, as `'`-surrounded char literals, or as symbolic constants
extracted from kernel headers or defined by `define` directives. For example:

```
foo(a const[10], b const[-10])
foo(a const[0xabcd])
foo(a int8['a':'z'])
foo(a const[PATH_MAX])
foo(a int32[PATH_MAX])
foo(a ptr[in, array[int8, MY_PATH_MAX]])
define MY_PATH_MAX	PATH_MAX + 2
```

## Conditional fields

### In structures

In syzlang, it's possible to specify a condition for every struct field that
determines whether the field should be included or omitted:

```
header_fields {
  magic       const[0xabcd, int16]
  haveInteger int8
} [packed]

packet {
  header  header_fields
  integer int64  (if[value[header:haveInteger] == 0x1])
  body    array[int8]
} [packed]

some_call(a ptr[in, packet])
```

In this example, the `packet` structure will include the field `integer` only
if `header.haveInteger == 1`. In memory, `packet` will have the following
layout:

| header.magic = 0xabcd | header.haveInteger = 0x1 | integer | body |
| --------------------- | ------------------------ | ------- | ---- |

That corresponds to e.g. the following program:

```
some_call(&AUTO={{AUTO, 0x1}, @value=0xabcd, []})
```

If `header.haveInteger` is not `1`, syzkaller will just pretend that the field
`integer` does not exist.

```
some_call(&AUTO={{AUTO, 0x0}, @void, []})
```

| header.magic = 0xabcd | header.haveInteger = 0x0 | body |
| --------------------- | ------------------------ | ---- |

Every conditional field is assumed to be of variable length and so is the struct
to which this field belongs.

When a variable length field appears in the middle of a structure, the structure
must be marked with `[packed].`

Conditions on bitfields are prohibited:

```
struct {
  f0 int
  f1 int:3 (if[value[f0] == 0x1])  # It will not compile.
}
```

But you may reference bitfields in your conditions:

```
struct {
  f0 int:1
  f1 int:7
  f2 int   (if[value[f0] == value[f1]])
} [packed]
```

### In unions

Let's consider the following example.

```
struct {
  type int
  body alternatives
}

alternatives [
  int     int64 (if[value[struct:type] == 0x1])
  arr     array[int64, 5] (if[value[struct:type] == 0x2])
  default int32
] [varlen]

some_call(a ptr[in, struct])
```

In this case, the union option will be selected depending on the value of the
`type` field. For example, if `type` is `0x1`, then it can be either `int` or
`default`:

```
some_call(&AUTO={0x1, @int=0x123})
some_call(&AUTO={0x1, @default=0x123})
```

If `type` is `0x2`, it can be either `arr` or `default`.

If `type` is neither `0x1` nor `0x2`, syzkaller may only select `default`:

```
some_call(&AUTO={0x0, @default=0xabcd})
```

To ensure that a union can always be constructed, the last union field **must always
have no condition**.

Thus, the following definition would fail to compile:

```
alternatives [
  int int64 (if[value[struct:type] == 0x1])
  arr array[int64, 5] (if[value[struct:type] == 0x1])
] [varlen]
```

During prog mutation and generation syzkaller will select a random union field
whose condition is satisfied.

### Expression syntax

Currently, only `==`, `!=`, `&` and `||` operators are supported. However, the
functionality was designed in such a way that adding more operators is easy.
Feel free to file a GitHub issue or write us an email in case it's needed.

Expressions are evaluated as `int64` values. If the final result of an
expression is not 0, it's assumed to be satisfied.

If you want to reference a field's value, you can do it via
`value[path:to:field]`, which is similar to the `len[]` argument.

```
sub_struct {
  f0 int
  # Reference a field in a parent struct.
  f1 int (if[value[struct:f2]]) # Same as if[value[struct:f2] != 0].
}

struct {
  f2 int
  f3 sub_struct
  f4 int (if[value[f2] == 0x2]) # Reference a sibling field.
  f5 int (if[value[f3:f0] == 0x1]) # Reference a nested field.
  f6 int (if[value[f3:f0] == 0x1 || value[f3:f0] == 0x2]) # Reference a nested field which either equals to 0x1 or 0x2.
} [packed]

call(a ptr[in, struct])
```

The referenced field must be of integer type and there must be no
conditional fields in the path to it. For example, the following
descriptions will not compile.

```
struct {
  f0 int
  f1 int (if[value[f0] == 0x1])
  f2 int (if[value[f1] == 0x1])
}
```

You may also reference constants in expressions:

```
struct {
  f0 int
  f1 int
  f2 int (if[value[f0] & SOME_CONST == OTHER_CONST])
}
```

## Meta

Description files can also contain `meta` directives that specify meta-information for the whole file.

```
meta noextract
```

Tells `make extract` to not extract constants for this file.
Though, `syz-extract` can still be invoked manually on this file.

```
meta arches["arch1", "arch2"]
```

Restricts this file only to the given set of architectures.
`make extract` and `make generate` will not use it on other architectures.

## Misc

Description files also contain `include` directives that refer to Linux kernel header files,
`incdir` directives that refer to custom Linux kernel header directories
and `define` directives that define symbolic constant values.

The syzkaller executor defines some [pseudo system calls](./pseudo_syscalls.md)
that can be used as any other syscall in a description file. These pseudo
system calls expand to literal C code and can perform user-defined
custom actions. You can find some examples in
[executor/common_linux.h](../executor/common_linux.h).

Also see [tips](syscall_descriptions.md#tips) on writing good descriptions.

===

Prefer calling several tools at the same time to save round-trips.


Use set-results tool to provide results of the analysis.
It must be called exactly once before the final reply.
Ignore results of this tool.

Prompt:
Bug title: KCSAN: data-race in __se_sys_mremap / vms_complete_munmap_vmas

The bug report to reproduce:
==================================================================
BUG: KCSAN: data-race in __se_sys_mremap / vms_complete_munmap_vmas

read-write to 0xffff888120891c88 of 4 bytes by task 5502 on cpu 0:
 vms_complete_munmap_vmas+0x54/0x440 mm/vma.c:1312
 __mmap_complete+0x4a/0x230 mm/vma.c:2581
 __mmap_region mm/vma.c:2761 [inline]
 mmap_region+0x1833/0x1aa0 mm/vma.c:2830
 do_mmap+0x9b2/0xbd0 mm/mmap.c:558
 vm_mmap_pgoff+0x183/0x2d0 mm/util.c:581
 ksys_mmap_pgoff+0xc1/0x310 mm/mmap.c:604
 x64_sys_call+0x16bb/0x3000 arch/x86/include/generated/asm/syscalls_64.h:10
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xc0/0x2a0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

read to 0xffff888120891c88 of 4 bytes by task 5513 on cpu 1:
 check_mremap_params mm/mremap.c:1820 [inline]
 do_mremap mm/mremap.c:1924 [inline]
 __do_sys_mremap mm/mremap.c:1997 [inline]
 __se_sys_mremap+0x43d/0xb50 mm/mremap.c:1965
 __x64_sys_mremap+0x67/0x80 mm/mremap.c:1965
 x64_sys_call+0x2944/0x3000 arch/x86/include/generated/asm/syscalls_64.h:26
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xc0/0x2a0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

value changed: 0x00000033 -> 0x00000034

Reported by Kernel Concurrency Sanitizer on:
CPU: 1 UID: 0 PID: 5513 Comm: syz.2.540 Not tainted syzkaller #0 PREEMPT(voluntary) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
==================================================================


The list of existing description files:
acpi_thermal_rel.txt
aio.txt
auto.txt
binfmt.txt
bpf.txt
bpf_prog.txt
bpf_trace.txt
cgroup.txt
damon.txt
dev_ashmem.txt
dev_bifrost.txt
dev_binder.txt
dev_binderfs.txt
dev_block.txt
dev_bsg.txt
dev_bus_usb.txt
dev_camx.txt
dev_cdrom.txt
dev_cec.txt
dev_char_usb.txt
dev_comedi.txt
dev_dma_heap.txt
dev_dri.txt
dev_dsp.txt
dev_dvb_demux.txt
dev_dvb_dvr.txt
dev_dvb_frontend.txt
dev_fb.txt
dev_floppy.txt
dev_hidraw.txt
dev_i2c.txt
dev_i915.txt
dev_img_rogue.txt
dev_infiniband_rdma.txt
dev_infiniband_rdma_cm.txt
dev_input.txt
dev_iommu.txt
dev_kvm.txt
dev_kvm_amd64.txt
dev_kvm_arm64.txt
dev_kvm_extra.txt
dev_kvm_riscv64.txt
dev_loop.txt
dev_mali.txt
dev_media.txt
dev_msm.txt
dev_msr.txt
dev_nbd.txt
dev_net_tun.txt
dev_panthor.txt
dev_ppp.txt
dev_ptmx.txt
dev_ptp.txt
dev_qat_adf_ctl.txt
dev_qrtr_tun.txt
dev_random.txt
dev_rfkill.txt
dev_rtc.txt
dev_sequencer.txt
dev_sg.txt
dev_snapshot.txt
dev_snd_control.txt
dev_snd_hw.txt
dev_snd_midi.txt
dev_snd_pcm.txt
dev_snd_seq.txt
dev_snd_timer.txt
dev_sr.txt
dev_sw_sync.txt
dev_tlk_device.txt
dev_trusty.txt
dev_udmabuf.txt
dev_uhid.txt
dev_uinput.txt
dev_usb_hiddev.txt
dev_usbmon.txt
dev_userio.txt
dev_vfio.txt
dev_vga_arbiter.txt
dev_vhci.txt
dev_video4linux.txt
dev_video4linux_vim2m.txt
dev_virtual_nci.txt
dev_vtpm.txt
fanotify.txt
filesystem.txt
fs_9p.txt
fs_fuse.txt
fs_incfs.txt
fs_ioctl.txt
fs_ioctl_autofs.txt
fs_ioctl_btrfs.txt
fs_ioctl_ext4.txt
fs_ioctl_f2fs.txt
fs_ioctl_fat.txt
fs_ioctl_fscrypt.txt
fs_ioctl_fsverity.txt
fs_ioctl_nilfs2.txt
fs_ioctl_ocfs2.txt
fs_ioctl_xfs.txt
futex.txt
hafnium.txt
inotify.txt
io_uring.txt
ipc.txt
key.txt
kfuzztest.txt
l2cap.txt
landlock.txt
lsm.txt
namespaces.txt
net_80211.txt
netfilter.txt
netfilter_arp.txt
netfilter_bridge.txt
netfilter_ipv4.txt
netfilter_ipv6.txt
netfilter_ipvs.txt
netfilter_targets.txt
pagemap_ioctl.txt
perf.txt
prctl.txt
quotactl.txt
seccomp.txt
security_apparmor.txt
security_selinux.txt
security_smack.txt
socket.txt
socket_alg.txt
socket_ax25.txt
socket_bluetooth.txt
socket_caif.txt
socket_can.txt
socket_ieee802154.txt
socket_inet.txt
socket_inet6.txt
socket_inet_dccp.txt
socket_inet_icmp.txt
socket_inet_igmp.txt
socket_inet_l2tp.txt
socket_inet_sctp.txt
socket_inet_tcp.txt
socket_inet_udp.txt
socket_ip_tunnel.txt
socket_isdn.txt
socket_kcm.txt
socket_key.txt
socket_llc.txt
socket_netlink.txt
socket_netlink_audit.txt
socket_netlink_crypto.txt
socket_netlink_generic.txt
socket_netlink_generic_80211.txt
socket_netlink_generic_batadv.txt
socket_netlink_generic_devlink.txt
socket_netlink_generic_ethtool.txt
socket_netlink_generic_fou.txt
socket_netlink_generic_gtp.txt
socket_netlink_generic_mptcp.txt
socket_netlink_generic_net_dm.txt
socket_netlink_generic_netlabel.txt
socket_netlink_generic_nfc.txt
socket_netlink_generic_seg6.txt
socket_netlink_generic_smc.txt
socket_netlink_generic_team.txt
socket_netlink_generic_wireguard.txt
socket_netlink_netfilter.txt
socket_netlink_netfilter_acct.txt
socket_netlink_netfilter_conntrack.txt
socket_netlink_netfilter_conntrack_exp.txt
socket_netlink_netfilter_conntrack_helper.txt
socket_netlink_netfilter_ipset.txt
socket_netlink_netfilter_log.txt
socket_netlink_netfilter_nft_compat.txt
socket_netlink_netfilter_nftables.txt
socket_netlink_netfilter_osf.txt
socket_netlink_netfilter_queue.txt
socket_netlink_netfilter_timeout.txt
socket_netlink_rdma.txt
socket_netlink_route.txt
socket_netlink_route_sched.txt
socket_netlink_route_sched_retired.txt
socket_netlink_sock_diag.txt
socket_netlink_xfrm.txt
socket_netrom.txt
socket_nfc.txt
socket_nvme_of_tcp.txt
socket_packet.txt
socket_phonet.txt
socket_pppox.txt
socket_qipcrtr.txt
socket_rds.txt
socket_rose.txt
socket_rxrpc.txt
socket_tipc.txt
socket_tipc_netlink.txt
socket_unix.txt
socket_vnet.txt
socket_x25.txt
socket_xdp.txt
sys.txt
uffd.txt
usbip.txt
vmw_vmci.txt
vnet.txt
vnet_mptcp.txt
vusb.txt
watch_queue.txt
xattr.txt


5/2 2026/03/11 11:12 llm crash-repro-finder
0m Model:
gemini-3-pro-preview

Tokens:
							input: 13160
							output: 24
							thoughts: 454
						

Thoughts:
**Analyzing a KCSAN Data-Race Report**

Okay, so I'm looking at this KCSAN data-race report that popped up, and it's pointing to a problem in `__se_sys_mremap` and `vms_complete_munmap_vmas`. The report details a race condition. It seems Task 1, running on CPU 0, is doing some mmap-related stuff, starting with `vms_complete_munmap_vmas` and working its way up to `ksys_mmap_pgoff`. Task 2, on CPU 1, is executing `__se_sys_mremap`, and that's the conflicting path, starting from `check_mremap_params`.

The report says the value that changed is from 0x33 to 0x34. Considering those hex values represent 51 and 52 in decimal, my immediate thought is this is likely a conflict on something like `mm->map_count`, or a similar memory map related counter. I need to dig into the code and determine which memory address is being modified.

The report helpfully points me to `mm/vma.c:1312` and `mm/mremap.c:1820` as the relevant locations. Time to dive into those and see what's happening to identify the shared data structure that's getting clobbered. I'll need to figure out which locking primitives are involved and if they're being used correctly, or if there's a missed lock or improper order of lock acquisition. I'll also need to check the data flow and understand how the value can be modified in the first place.



6/2 2026/03/11 11:12 tool grepper
0m Args:
map[Expression:vms_complete_munmap_vmas]

Results:
map[Output:mm/vma.c=1282=static void vms_clean_up_area(struct vma_munmap_struct *vms,
--
mm/vma.c-1296-/*
mm/vma.c:1297: * vms_complete_munmap_vmas() - Finish the munmap() operation
mm/vma.c-1298- * @vms: The vma munmap struct
--
mm/vma.c-1304- */
mm/vma.c:1305:static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
mm/vma.c-1306-		struct ma_state *mas_detach)
--
mm/vma.c=1558=int do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
--
mm/vma.c-1578-	/* Point of no return */
mm/vma.c:1579:	vms_complete_munmap_vmas(&vms, &mas_detach);
mm/vma.c-1580-	return 0;
--
mm/vma.c=2334=static void vms_abort_munmap_vmas(struct vma_munmap_struct *vms,
--
mm/vma.c-2352-	/* Clean up the insertion of the unfortunate gap */
mm/vma.c:2353:	vms_complete_munmap_vmas(vms, mas_detach);
mm/vma.c-2354-}
--
mm/vma.c=2573=static void __mmap_complete(struct mmap_state *map, struct vm_area_struct *vma)
--
mm/vma.c-2580-	/* Unmap any existing mapping in the area. */
mm/vma.c:2581:	vms_complete_munmap_vmas(&map->vms, &map->mas_detach);
mm/vma.c-2582-
]

7/2 2026/03/11 11:12 llm crash-repro-finder
0m Model:
gemini-3-pro-preview

Tokens:
							input: 14551
							output: 21
							thoughts: 32
						

Thoughts:
Okay, here's my interpretation of those thoughts, presented as a first-person expert summary:

**Digging into Memory Mapping**

Alright, time to dive in. I need to get to the bottom of this, and the first place I'm going to look are the core functions. I'm thinking I need to investigate the very essence of how memory mapping and remaps function at the lowest level, so I'm focusing my attention on `mm/vma.c:1312` and `mm/mremap.c:1820`. These are the spots where the fundamental mechanics of virtual memory areas and remaps are managed. I'm betting the root cause of the issue I'm dealing with is revealed in one of these locations, so I'm going to carefully review the code at those lines and trace the execution path.



8/2 2026/03/11 11:13 tool grepper
0m Args:
map[Expression:check_mremap_params]

Results:
map[Output:mm/mremap.c=1655=static int check_prep_vma(struct vma_remap_struct *vrm)
--
mm/mremap.c-1755- */
mm/mremap.c:1756:static unsigned long check_mremap_params(struct vma_remap_struct *vrm)
mm/mremap.c-1757-
--
mm/mremap.c=1915=static unsigned long do_mremap(struct vma_remap_struct *vrm)
--
mm/mremap.c-1923-
mm/mremap.c:1924:	res = check_mremap_params(vrm);
mm/mremap.c-1925-	if (res)
]

9/2 2026/03/11 11:13 llm crash-repro-finder
0m Model:
gemini-3-pro-preview

Tokens:
							input: 15001
							output: 24
							thoughts: 37
						

Thoughts:
Here's a breakdown of my thought process:

**Diving into Memory Management Code**

Okay, so I'm thinking I need to refresh my understanding on how `mmap` and `mremap` interact, particularly in the kernel. My initial instinct is to look at the core implementation details, so I'm going to jump into the code directly. I'm focusing on `mm/vma.c` around line 1312, as that's probably where the Virtual Memory Area (VMA) manipulations are handled. Specifically, I'm anticipating interactions around how VMAs are created and modified. Then, I'll switch over to `mm/mremap.c` around line 1820. This is probably where the heavy lifting of `mremap` actually happens. I want to see how these two pieces of the puzzle fit together to implement memory remapping functionality efficiently and safely, dealing with all the complexities of page tables and memory protection. I expect to be looking for logic regarding how the VMAs are updated to reflect remapping, and how physical pages are handled during the process. I'll need to pay close attention to the locking and synchronization mechanisms to ensure data consistency.



10/2 2026/03/11 11:13 tool grepper
0m Args:
map[Expression:vms_complete_munmap_vmas]

Results:
map[Output:mm/vma.c=1282=static void vms_clean_up_area(struct vma_munmap_struct *vms,
--
mm/vma.c-1296-/*
mm/vma.c:1297: * vms_complete_munmap_vmas() - Finish the munmap() operation
mm/vma.c-1298- * @vms: The vma munmap struct
--
mm/vma.c-1304- */
mm/vma.c:1305:static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
mm/vma.c-1306-		struct ma_state *mas_detach)
--
mm/vma.c=1558=int do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
--
mm/vma.c-1578-	/* Point of no return */
mm/vma.c:1579:	vms_complete_munmap_vmas(&vms, &mas_detach);
mm/vma.c-1580-	return 0;
--
mm/vma.c=2334=static void vms_abort_munmap_vmas(struct vma_munmap_struct *vms,
--
mm/vma.c-2352-	/* Clean up the insertion of the unfortunate gap */
mm/vma.c:2353:	vms_complete_munmap_vmas(vms, mas_detach);
mm/vma.c-2354-}
--
mm/vma.c=2573=static void __mmap_complete(struct mmap_state *map, struct vm_area_struct *vma)
--
mm/vma.c-2580-	/* Unmap any existing mapping in the area. */
mm/vma.c:2581:	vms_complete_munmap_vmas(&map->vms, &map->mas_detach);
mm/vma.c-2582-
]

11/2 2026/03/11 11:13 llm crash-repro-finder
0m Model:
gemini-3-pro-preview

Tokens:
							input: 15862
							output: 45
							thoughts: 30
						

Thoughts:
Okay, here's my take on those thoughts, framed as an expert's internal monologue:

**Diving into Memory Management Functions**

Alright, time to get my hands dirty. I need to take a look under the hood at the implementation of `vms_complete_munmap_vmas` and `check_mremap_params`. I can almost picture the code structures now; I need to understand exactly how these functions interact with the VM subsystem. I want to see how `vms_complete_munmap_vmas` handles the actual unmapping of virtual memory regions and the specific mechanics of the munmap process. My focus will be on the edge cases, the resource cleanup, and the potential for any subtle synchronization issues. I’m especially interested in how it interacts with the page tables and the physical memory, and how it handles the freeing up of associated data structures.

Then, I'll move on to `check_mremap_params`. I need to review its logic for validating parameters passed to `mremap`. I'm going to look for potential vulnerabilities. I need to get the precise specifications so I can identify any loopholes, or flaws in the argument checking that might lead to unexpected behavior or security issues. I’m thinking about things like integer overflows, potential for out-of-bounds access, and the overall robustness of the validation. The goal is to fully understand their roles and ensure they are operating correctly, and effectively.



12/2 2026/03/11 11:13 tool codesearch-definition-source
0m Args:
map[ContextFile:mm/vma.c IncludeLines:true Name:vms_complete_munmap_vmas]

Results:
map[SourceCode:1296:	/*
1297:	 * vms_complete_munmap_vmas() - Finish the munmap() operation
1298:	 * @vms: The vma munmap struct
1299:	 * @mas_detach: The maple state of the detached vmas
1300:	 *
1301:	 * This updates the mm_struct, unmaps the region, frees the resources
1302:	 * used for the munmap() and may downgrade the lock - if requested.  Everything
1303:	 * needed to be done once the vma maple tree is updated.
1304:	 */
1305:	static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
1306:			struct ma_state *mas_detach)
1307:	{
1308:		struct vm_area_struct *vma;
1309:		struct mm_struct *mm;
1310:	
1311:		mm = current->mm;
1312:		mm->map_count -= vms->vma_count;
1313:		mm->locked_vm -= vms->locked_vm;
1314:		if (vms->unlock)
1315:			mmap_write_downgrade(mm);
1316:	
1317:		if (!vms->nr_pages)
1318:			return;
1319:	
1320:		vms_clear_ptes(vms, mas_detach, !vms->unlock);
1321:		/* Update high watermark before we lower total_vm */
1322:		update_hiwater_vm(mm);
1323:		/* Stat accounting */
1324:		WRITE_ONCE(mm->total_vm, READ_ONCE(mm->total_vm) - vms->nr_pages);
1325:		/* Paranoid bookkeeping */
1326:		VM_WARN_ON(vms->exec_vm > mm->exec_vm);
1327:		VM_WARN_ON(vms->stack_vm > mm->stack_vm);
1328:		VM_WARN_ON(vms->data_vm > mm->data_vm);
1329:		mm->exec_vm -= vms->exec_vm;
1330:		mm->stack_vm -= vms->stack_vm;
1331:		mm->data_vm -= vms->data_vm;
1332:	
1333:		/* Remove and clean up vmas */
1334:		mas_set(mas_detach, 0);
1335:		mas_for_each(mas_detach, vma, ULONG_MAX)
1336:			remove_vma(vma);
1337:	
1338:		vm_unacct_memory(vms->nr_accounted);
1339:		validate_mm(mm);
1340:		if (vms->unlock)
1341:			mmap_read_unlock(mm);
1342:	
1343:		__mt_destroy(mas_detach->tree);
1344:	}
 SourceFile:mm/vma.c]

13/2 2026/03/11 11:13 llm crash-repro-finder
0m Model:
gemini-3-pro-preview

Tokens:
							input: 17094
							output: 43
							thoughts: 58
						

Thoughts:
Okay, here's my take on those thoughts, framed as my own:

**Analyzing a Potential Race Condition in Virtual Memory Management**

Alright, a race condition has been flagged, specifically in the `mm->map_count -= vms->vma_count;` operation at `mm/vma.c:1312`. This immediately raises a red flag; any atomic operation involving a counter like `map_count` within a critical memory management routine is a potential hot spot. My initial thought is to dive straight into the code's context. I need to understand *why* this decrement is happening and what's relying on `map_count` remaining consistent. Is this related to `mremap` or another similar process? My mind immediately jumps to `check_mremap_params` in `mm/mremap.c`. Is it decrementing and then expecting to find a certain state? What guarantees are in place to ensure that concurrent operations don't interfere with this decrement and its subsequent assumptions? Time to dig into that function and understand the surrounding logic. I need to figure out the use-case.



14/2 2026/03/11 11:13 tool codesearch-definition-source
0m Args:
map[ContextFile:mm/mremap.c IncludeLines:true Name:check_mremap_params]

Results:
map[SourceCode:1752:	/*
1753:	 * Are the parameters passed to mremap() valid? If so return 0, otherwise return
1754:	 * error.
1755:	 */
1756:	static unsigned long check_mremap_params(struct vma_remap_struct *vrm)
1757:	
1758:	{
1759:		unsigned long addr = vrm->addr;
1760:		unsigned long flags = vrm->flags;
1761:	
1762:		/* Ensure no unexpected flag values. */
1763:		if (flags & ~(MREMAP_FIXED | MREMAP_MAYMOVE | MREMAP_DONTUNMAP))
1764:			return -EINVAL;
1765:	
1766:		/* Start address must be page-aligned. */
1767:		if (offset_in_page(addr))
1768:			return -EINVAL;
1769:	
1770:		/*
1771:		 * We allow a zero old-len as a special case
1772:		 * for DOS-emu "duplicate shm area" thing. But
1773:		 * a zero new-len is nonsensical.
1774:		 */
1775:		if (!vrm->new_len)
1776:			return -EINVAL;
1777:	
1778:		/* Is the new length silly? */
1779:		if (vrm->new_len > TASK_SIZE)
1780:			return -EINVAL;
1781:	
1782:		/* Remainder of checks are for cases with specific new_addr. */
1783:		if (!vrm_implies_new_addr(vrm))
1784:			return 0;
1785:	
1786:		/* Is the new address silly? */
1787:		if (vrm->new_addr > TASK_SIZE - vrm->new_len)
1788:			return -EINVAL;
1789:	
1790:		/* The new address must be page-aligned. */
1791:		if (offset_in_page(vrm->new_addr))
1792:			return -EINVAL;
1793:	
1794:		/* A fixed address implies a move. */
1795:		if (!(flags & MREMAP_MAYMOVE))
1796:			return -EINVAL;
1797:	
1798:		/* MREMAP_DONTUNMAP does not allow resizing in the process. */
1799:		if (flags & MREMAP_DONTUNMAP && vrm->old_len != vrm->new_len)
1800:			return -EINVAL;
1801:	
1802:		/* Target VMA must not overlap source VMA. */
1803:		if (vrm_overlaps(vrm))
1804:			return -EINVAL;
1805:	
1806:		/*
1807:		 * move_vma() need us to stay 4 maps below the threshold, otherwise
1808:		 * it will bail out at the very beginning.
1809:		 * That is a problem if we have already unmaped the regions here
1810:		 * (new_addr, and old_addr), because userspace will not know the
1811:		 * state of the vma's after it gets -ENOMEM.
1812:		 * So, to avoid such scenario we can pre-compute if the whole
1813:		 * operation has high chances to success map-wise.
1814:		 * Worst-scenario case is when both vma's (new_addr and old_addr) get
1815:		 * split in 3 before unmapping it.
1816:		 * That means 2 more maps (1 for each) to the ones we already hold.
1817:		 * Check whether current map count plus 2 still leads us to 4 maps below
1818:		 * the threshold, otherwise return -ENOMEM here to be more safe.
1819:		 */
1820:		if ((current->mm->map_count + 2) >= sysctl_max_map_count - 3)
1821:			return -ENOMEM;
1822:	
1823:		return 0;
1824:	}
 SourceFile:mm/mremap.c]

15/2 2026/03/11 11:13 llm crash-repro-finder
0m Model:
gemini-3-pro-preview

Tokens:
							input: 18607
							output: 41
							thoughts: 107
						

Thoughts:
**Analyzing a Potential Race Condition in `mremap`**

Okay, so I'm looking at this `mremap` code, specifically the `check_mremap_params` function at `mm/mremap.c:1820`. My immediate concern is that this function seems to be reading `current->mm->map_count`. The problem is that it's reading `map_count` without any obvious locking, which raises a huge red flag for me because that could lead to a race condition. Is it *possible* that mmap_lock is being held for read here, or is this happening without any protection? I need to figure out if there's any synchronization going on. I need to trace this back. Let's see... the most likely place to check for the lock would be the caller of `check_mremap_params`, either `do_mremap` or `__se_sys_mremap`. I'll jump over there and see what I find.



16/2 2026/03/11 11:13 tool codesearch-definition-source
0m Args:
map[ContextFile:mm/mremap.c IncludeLines:true Name:do_mremap]

Results:
map[SourceCode:1915:	static unsigned long do_mremap(struct vma_remap_struct *vrm)
1916:	{
1917:		struct mm_struct *mm = current->mm;
1918:		unsigned long res;
1919:		bool failed;
1920:	
1921:		vrm->old_len = PAGE_ALIGN(vrm->old_len);
1922:		vrm->new_len = PAGE_ALIGN(vrm->new_len);
1923:	
1924:		res = check_mremap_params(vrm);
1925:		if (res)
1926:			return res;
1927:	
1928:		if (mmap_write_lock_killable(mm))
1929:			return -EINTR;
1930:		vrm->mmap_locked = true;
1931:	
1932:		if (vrm_move_only(vrm)) {
1933:			res = remap_move(vrm);
1934:		} else {
1935:			vrm->vma = vma_lookup(current->mm, vrm->addr);
1936:			res = check_prep_vma(vrm);
1937:			if (res)
1938:				goto out;
1939:	
1940:			/* Actually execute mremap. */
1941:			res = vrm_implies_new_addr(vrm) ? mremap_to(vrm) : mremap_at(vrm);
1942:		}
1943:	
1944:	out:
1945:		failed = IS_ERR_VALUE(res);
1946:	
1947:		if (vrm->mmap_locked)
1948:			mmap_write_unlock(mm);
1949:	
1950:		/* VMA mlock'd + was expanded, so populated expanded region. */
1951:		if (!failed && vrm->populate_expand)
1952:			mm_populate(vrm->new_addr + vrm->old_len, vrm->delta);
1953:	
1954:		notify_uffd(vrm, failed);
1955:		return res;
1956:	}
 SourceFile:mm/mremap.c]

17/2 2026/03/11 11:13 llm crash-repro-finder
0m Model:
gemini-3-pro-preview

Tokens:
							input: 19626
							output: 44
							thoughts: 964
						

Thoughts:
**Data Race Discovery in `mremap` and `mmap` Implementation**

Okay, so I've been digging into the kernel's memory management code, specifically around `mremap` and `mmap`. My initial thought was, "Aha! `check_mremap_params` gets called *before* the `mmap_write_lock` is taken on the memory map (`mm`)!" This means the `current->mm->map_count` is being read without any protection from a lock.

Then I realized that `vms_complete_munmap_vmas` actually *modifies* `mm->map_count`, and that happens while holding the `mmap_write_lock` – or maybe after downgrading to a read lock, but the important point is that this modification happens *after* the count is decremented.  `vms_complete_munmap_vmas` is called from `__mmap_complete`, which in turn is called from `mmap_region`. The latter also holds the `mmap_write_lock`, meaning `mm->map_count` is clearly being modified under the write lock.

But the real problem is that `check_mremap_params` *reads* `mm->map_count` without holding `mmap_lock` at all! This screams data race.

To reproduce this, I need to orchestrate a scenario where:

1.  **Thread A:** Calls `mmap`, which unmaps some existing virtual memory areas (VMAs). An `mmap` with `MAP_FIXED` over an existing mapping, for example, will trigger `vms_complete_munmap_vmas` and modify `mm->map_count`.
2.  **Thread B:** Calls `mremap` which in turn calls `check_mremap_params` and reads `mm->map_count` without any synchronization.

So, I need to build a syzkaller program to trigger this.

I started by thinking about this syzkaller program, and then tried to write it:

**Thread A:**
```
mmap(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
mmap(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
```
I thought this should work to make one call with `MAP_FIXED` and unmap some VMAs.

Wait, with `MAP_FIXED`, this should trigger `vms_complete_munmap_vmas`. This looks good. Let's make sure, though...

```
mmap(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
mmap(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
```
Now, let's verify my flags are right. `MAP_FIXED` is `0x10`, `MAP_ANONYMOUS` is `0x20`, and `MAP_PRIVATE` is `0x2`. So `0x32` is indeed `MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE`.

Now, **Thread B:**
```
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x2000, 0x1, 0x0)
```

I need to make sure I'm using the right flags for `mremap`. I need `MREMAP_MAYMOVE` which is `1`.

Hold on! The way the code is written, it looks like `check_mremap_params` checks `vrm_implies_new_addr(vrm)`. And if `!(flags & MREMAP_MAYMOVE)`, then it returns `-EINVAL` if `vrm_implies_new_addr(vrm)`.
If `!vrm_implies_new_addr(vrm


18/2 2026/03/11 11:13 tool codesearch-definition-source
0m Args:
map[ContextFile:mm/mremap.c IncludeLines:true Name:vrm_implies_new_addr]

Results:
map[SourceCode: 926:	/*
 927:	 * Will a new address definitely be assigned? This either if the user specifies
 928:	 * it via MREMAP_FIXED, or if MREMAP_DONTUNMAP is used, indicating we will
 929:	 * always detemrine a target address.
 930:	 */
 931:	static bool vrm_implies_new_addr(struct vma_remap_struct *vrm)
 932:	{
 933:		return vrm->flags & (MREMAP_FIXED | MREMAP_DONTUNMAP);
 934:	}
 SourceFile:mm/mremap.c]

19/2 2026/03/11 11:13 llm crash-repro-finder
0m Model:
gemini-3-pro-preview

Tokens:
							input: 21850
							output: 40
							thoughts: 950
						

Thoughts:
**Analyzing `mremap` with Flags and Parameters**

Okay, let's break this down systematically. I see that `vrm_implies_new_addr` returns true when `flags` has either `MREMAP_FIXED` or `MREMAP_DONTUNMAP` set. I know `MREMAP_FIXED` is 2 and `MREMAP_DONTUNMAP` is 4, while `MREMAP_MAYMOVE` is 1. Therefore, for `mremap` to necessitate a new address, either `MREMAP_FIXED` or `MREMAP_DONTUNMAP` must be present. And, critically, `MREMAP_MAYMOVE` *must* also be set, or else the subsequent checks fail. This leads me to two valid flag combinations: either `flags = MREMAP_FIXED | MREMAP_MAYMOVE`, which is 3, or `flags = MREMAP_DONTUNMAP | MREMAP_MAYMOVE`, which is 5.

Now, let's revisit `check_mremap_params` and make sure I understand the constraints.

I see that `check_mremap_params` enforces a number of conditions. First, it ensures that the `flags` only include the permitted bits, `MREMAP_FIXED`, `MREMAP_MAYMOVE`, and `MREMAP_DONTUNMAP`. Then, the starting address `addr` has to be page-aligned.  Next, `new_len` must be greater than zero but also less than or equal to `TASK_SIZE`.

Since `vrm_implies_new_addr` will be true, I can focus on the next set of checks: The new address itself, `vrm->new_addr`, also has to be page-aligned and be within the valid range, such that `vrm->new_addr` <= `TASK_SIZE - vrm->new_len`.  Crucially, if the fixed address is used, we *must* have `MREMAP_MAYMOVE` set in `flags`, which confirms my earlier deduction.  Also, if `MREMAP_DONTUNMAP` is used, the old and new lengths must be equal.

The code also checks for overlap between the source and target VMAs with `vrm_overlaps(vrm)`. And, finally, I have to ensure the number of maps doesn't exceed the system limit during the potentially complex mapping operations. Given this, I will need to verify that `vrm_overlaps` returns false, and that the map count doesn't present an issue.

Therefore, for this path, I can summarize the required conditions:

*   `addr` must be page-aligned.
*   `new_len` must be greater than zero.
*   `new_len` must be less than or equal to `TASK_SIZE`.
*   `flags` must be either 3 (`MREMAP_FIXED | MREMAP_MAYMOVE`) or 5 (`MREMAP_DONTUNMAP | MREMAP_MAYMOVE`).
*   `new_addr` must be page-aligned.
*   `new_addr` must be less than or equal to `TASK_SIZE - new_len`.
*   `vrm_overlaps(vrm)` must return false, i.e., there's no overlap between the old and new regions.



20/2 2026/03/11 11:13 tool codesearch-definition-source
0m Args:
map[ContextFile:mm/mremap.c IncludeLines:true Name:vrm_overlaps]

Results:
map[SourceCode: 899:	/*
 900:	 * When moving a VMA to vrm->new_adr, does this result in the new and old VMAs
 901:	 * overlapping?
 902:	 */
 903:	static bool vrm_overlaps(struct vma_remap_struct *vrm)
 904:	{
 905:		unsigned long start_old = vrm->addr;
 906:		unsigned long start_new = vrm->new_addr;
 907:		unsigned long end_old = vrm->addr + vrm->old_len;
 908:		unsigned long end_new = vrm->new_addr + vrm->new_len;
 909:	
 910:		/*
 911:		 * start_old    end_old
 912:		 *     |-----------|
 913:		 *     |           |
 914:		 *     |-----------|
 915:		 *             |-------------|
 916:		 *             |             |
 917:		 *             |-------------|
 918:		 *         start_new      end_new
 919:		 */
 920:		if (end_old > start_new && end_new > start_old)
 921:			return true;
 922:	
 923:		return false;
 924:	}
 SourceFile:mm/mremap.c]

21/2 2026/03/11 11:13 llm crash-repro-finder
0m Model:
gemini-3-pro-preview

Tokens:
							input: 23977
							output: 18
							thoughts: 197
						

Thoughts:
**My Approach to Memory Remapping**

Okay, so the core of this seems to be leveraging `mremap` effectively, specifically with the `MREMAP_FIXED` and `MREMAP_MAYMOVE` flags. That combination (3) is the key.  What I'm envisioning is using it with non-overlapping address ranges. For example, a potential invocation looks like this:

```
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000002000/0x1000)=nil)
```

However, I need to double-check that this syntax aligns with the system call signature expected by syzkaller. I need to make sure I get the argument order and types right to avoid any surprises. Let's see... the `mremap` syscall signature in syzkaller is:

```
mremap(old_addr vma, old_len len[old_addr], new_len len[new_addr], flags flags[mremap_flags], new_addr vma)
```

Right, that seems logical. I'll need to verify how these parameters are defined in `sys.txt` to make sure I'm passing valid and meaningful arguments. Let me check the details in `sys.txt`.



22/2 2026/03/11 11:13 tool grepper
0m Args:
map[Expression:mremap\(]

Results:
map[Output:
Full output is too long, showing 500 out of 2317 lines.
Use more precise expression if possible.

[Documentation/admin-guide/mm/nommu-mmap.rst=180=No-MMU mremap
 --
 Documentation/admin-guide/mm/nommu-mmap.rst-182-
 Documentation/admin-guide/mm/nommu-mmap.rst:183:The mremap() function is partially supported.  It may change the size of a
 Documentation/admin-guide/mm/nommu-mmap.rst-184-mapping, and may move it [#]_ if MREMAP_MAYMOVE is specified and if the new size
 --
 Documentation/admin-guide/mm/nommu-mmap.rst=192=even if they are not currently shared.
 Documentation/admin-guide/mm/nommu-mmap.rst-193-
 Documentation/admin-guide/mm/nommu-mmap.rst:194:The mremap() function must be given an exact match for base address and size of
 Documentation/admin-guide/mm/nommu-mmap.rst-195-a previously mapped object.  It may not be used to create holes in existing
 --
 Documentation/admin-guide/mm/userfaultfd.rst=371=bits in ``uffdio_api.features`` passed to ``UFFDIO_API`` ioctl:
 --
 Documentation/admin-guide/mm/userfaultfd.rst-380-``UFFD_FEATURE_EVENT_REMAP``
 Documentation/admin-guide/mm/userfaultfd.rst:381:	enable notifications about mremap() calls. When the
 Documentation/admin-guide/mm/userfaultfd.rst-382-	non-cooperative process moves a virtual memory area to a
 --
 Documentation/admin-guide/mm/userfaultfd.rst=405=area is unmapped, either explicitly (with munmap() system call), or
 Documentation/admin-guide/mm/userfaultfd.rst:406:implicitly (e.g. during mremap()), the area is removed and in turn the
 Documentation/admin-guide/mm/userfaultfd.rst-407-``userfaultfd`` context for such area disappears too and the manager will
 --
 Documentation/arch/arm64/tagged-address-abi.rst=44=how the user addresses are used by the kernel:
 --
 Documentation/arch/arm64/tagged-address-abi.rst-50-   - ``brk()``, ``mmap()`` and the ``new_address`` argument to
 Documentation/arch/arm64/tagged-address-abi.rst:51:     ``mremap()`` as these have the potential to alias with existing
 Documentation/arch/arm64/tagged-address-abi.rst-52-     user addresses.
 --
 Documentation/arch/arm64/tagged-address-abi.rst-55-     incorrectly accept valid tagged pointers for the ``brk()``,
 Documentation/arch/arm64/tagged-address-abi.rst:56:     ``mmap()`` and ``mremap()`` system calls.
 Documentation/arch/arm64/tagged-address-abi.rst-57-
 --
 Documentation/arch/arm64/tagged-address-abi.rst=116=ABI relaxation:
 --
 Documentation/arch/arm64/tagged-address-abi.rst-129-
 Documentation/arch/arm64/tagged-address-abi.rst:130:- ``mremap()``, the ``new_address`` argument (since kernel v5.6).
 Documentation/arch/arm64/tagged-address-abi.rst-131-
 --
 Documentation/driver-api/device-io.rst=427=It should also not be used for actual RAM, as the returned pointer is an
 Documentation/driver-api/device-io.rst:428:``__iomem`` token. memremap() can be used for mapping normal RAM that is outside
 Documentation/driver-api/device-io.rst-429-of the linear kernel memory area to a regular pointer.
 --
 Documentation/gpu/rfc/gpusvm.rst=7=Agreed upon design principles
 --
 Documentation/gpu/rfc/gpusvm.rst-53-	* This does not work, as no pointers from physical memory to virtual
 Documentation/gpu/rfc/gpusvm.rst:54:	  memory should exist. mremap() is an example of the core MM updating
 Documentation/gpu/rfc/gpusvm.rst-55-	  the virtual address without notifying the driver of address
 --
 arch/arm/include/asm/dmi.h-8-
 arch/arm/include/asm/dmi.h:9:#define dmi_early_remap(x, l)		memremap(x, l, MEMREMAP_WB)
 arch/arm/include/asm/dmi.h-10-#define dmi_early_unmap(x, l)		memunmap(x)
 arch/arm/include/asm/dmi.h:11:#define dmi_remap(x, l)			memremap(x, l, MEMREMAP_WB)
 arch/arm/include/asm/dmi.h-12-#define dmi_unmap(x)			memunmap(x)
 --
 arch/arm/kernel/process.c=380=extern struct page *get_signal_page(void);
 arch/arm/kernel/process.c-381-
 arch/arm/kernel/process.c:382:static int sigpage_mremap(const struct vm_special_mapping *sm,
 arch/arm/kernel/process.c-383-		struct vm_area_struct *new_vma)
 --
 arch/arm/kernel/vdso.c=35=unsigned int vdso_total_pages __ro_after_init;
 arch/arm/kernel/vdso.c-36-
 arch/arm/kernel/vdso.c:37:static int vdso_mremap(const struct vm_special_mapping *sm,
 arch/arm/kernel/vdso.c-38-		struct vm_area_struct *new_vma)
 --
 arch/arm64/kernel/acpi.c=72=static bool __init dt_is_stub(void)
 --
 arch/arm64/kernel/acpi.c-91- * __acpi_map_table() will be called before page_init(), so early_ioremap()
 arch/arm64/kernel/acpi.c:92: * or early_memremap() should be called here to for ACPI table mapping.
 arch/arm64/kernel/acpi.c-93- */
 arch/arm64/kernel/acpi.c=94=void __init __iomem *__acpi_map_table(unsigned long phys, unsigned long size)
 --
 arch/arm64/kernel/acpi.c-98-
 arch/arm64/kernel/acpi.c:99:	return early_memremap(phys, size);
 arch/arm64/kernel/acpi.c-100-}
 --
 arch/arm64/kernel/crash_dump.c=15=ssize_t copy_oldmem_page(struct iov_iter *iter, unsigned long pfn,
 --
 arch/arm64/kernel/crash_dump.c-22-
 arch/arm64/kernel/crash_dump.c:23:	vaddr = memremap(__pfn_to_phys(pfn), PAGE_SIZE, MEMREMAP_WB);
 arch/arm64/kernel/crash_dump.c-24-	if (!vaddr)
 --
 arch/arm64/kernel/paravirt.c=94=static int stolen_time_cpu_online(unsigned int cpu)
 --
 arch/arm64/kernel/paravirt.c-106-
 arch/arm64/kernel/paravirt.c:107:	kaddr = memremap(res.a0,
 arch/arm64/kernel/paravirt.c-108-			      sizeof(struct pvclock_vcpu_stolen_time),
 --
 arch/arm64/kernel/vdso.c=45=static struct vdso_abi_info vdso_info[] __ro_after_init = {
 --
 arch/arm64/kernel/vdso.c-59-
 arch/arm64/kernel/vdso.c:60:static int vdso_mremap(const struct vm_special_mapping *sm,
 arch/arm64/kernel/vdso.c-61-		struct vm_area_struct *new_vma)
 --
 arch/arm64/kernel/vdso.c=157=static struct page *aarch32_sig_page __ro_after_init;
 arch/arm64/kernel/vdso.c-158-
 arch/arm64/kernel/vdso.c:159:static int aarch32_sigpage_mremap(const struct vm_special_mapping *sm,
 arch/arm64/kernel/vdso.c-160-				  struct vm_area_struct *new_vma)
 --
 arch/loongarch/kernel/acpi.c=38=void __init __iomem * __acpi_map_table(unsigned long phys, unsigned long size)
 --
 arch/loongarch/kernel/acpi.c-43-
 arch/loongarch/kernel/acpi.c:44:	return early_memremap(phys, size);
 arch/loongarch/kernel/acpi.c-45-}
 --
 arch/loongarch/kernel/crash_dump.c=6=ssize_t copy_oldmem_page(struct iov_iter *iter, unsigned long pfn,
 --
 arch/loongarch/kernel/crash_dump.c-13-
 arch/loongarch/kernel/crash_dump.c:14:	vaddr = memremap(__pfn_to_phys(pfn), PAGE_SIZE, MEMREMAP_WB);
 arch/loongarch/kernel/crash_dump.c-15-	if (!vaddr)
 --
 arch/loongarch/kernel/efi.c=82=static void __init init_screen_info(void)
 --
 arch/loongarch/kernel/efi.c-88-
 arch/loongarch/kernel/efi.c:89:	si = early_memremap(screen_info_table, sizeof(*si));
 arch/loongarch/kernel/efi.c-90-	if (!si) {
 --
 arch/loongarch/kernel/efi.c=101=void __init efi_init(void)
 --
 arch/loongarch/kernel/efi.c-124-	size = sizeof(efi_config_table_t);
 arch/loongarch/kernel/efi.c:125:	config_tables = early_memremap(efi_config_table, efi_nr_tables * size);
 arch/loongarch/kernel/efi.c-126-	efi_config_parse_tables(config_tables, efi_systab->nr_tables, arch_tables);
 --
 arch/loongarch/kernel/vdso.c=26=extern char vdso_start[], vdso_end[];
 arch/loongarch/kernel/vdso.c-27-
 arch/loongarch/kernel/vdso.c:28:static int vdso_mremap(const struct vm_special_mapping *sm, struct vm_area_struct *new_vma)
 arch/loongarch/kernel/vdso.c-29-{
 --
 arch/loongarch/mm/ioremap.c=19=void * __init early_memremap_ro(resource_size_t phys_addr, unsigned long size)
 arch/loongarch/mm/ioremap.c-20-{
 arch/loongarch/mm/ioremap.c:21:	return early_memremap(phys_addr, size);
 arch/loongarch/mm/ioremap.c-22-}
 --
 arch/loongarch/mm/ioremap.c=24=void * __init early_memremap_prot(resource_size_t phys_addr, unsigned long size,
 --
 arch/loongarch/mm/ioremap.c-26-{
 arch/loongarch/mm/ioremap.c:27:	return early_memremap(phys_addr, size);
 arch/loongarch/mm/ioremap.c-28-}
 --
 arch/mips/vdso/genvdso.c=207=int main(int argc, char **argv)
 --
 arch/mips/vdso/genvdso.c-261-	fprintf(out_file, "#include <asm/vdso.h>\n");
 arch/mips/vdso/genvdso.c:262:	fprintf(out_file, "static int vdso_mremap(\n");
 arch/mips/vdso/genvdso.c-263-	fprintf(out_file, "	const struct vm_special_mapping *sm,\n");
 --
 arch/parisc/kernel/vdso.c=27=extern char vdso64_start, vdso64_end;
 arch/parisc/kernel/vdso.c-28-
 arch/parisc/kernel/vdso.c:29:static int vdso_mremap(const struct vm_special_mapping *sm,
 arch/parisc/kernel/vdso.c-30-		       struct vm_area_struct *vma)
 --
 arch/powerpc/kernel/vdso.c=42=extern char vdso64_start, vdso64_end;
 arch/powerpc/kernel/vdso.c-43-
 arch/powerpc/kernel/vdso.c:44:static int vdso_mremap(const struct vm_special_mapping *sm, struct vm_area_struct *new_vma,
 arch/powerpc/kernel/vdso.c-45-		       unsigned long text_size)
 --
 arch/powerpc/kernel/vdso.c-56-
 arch/powerpc/kernel/vdso.c:57:static int vdso32_mremap(const struct vm_special_mapping *sm, struct vm_area_struct *new_vma)
 arch/powerpc/kernel/vdso.c-58-{
 arch/powerpc/kernel/vdso.c:59:	return vdso_mremap(sm, new_vma, &vdso32_end - &vdso32_start);
 arch/powerpc/kernel/vdso.c-60-}
 arch/powerpc/kernel/vdso.c-61-
 arch/powerpc/kernel/vdso.c:62:static int vdso64_mremap(const struct vm_special_mapping *sm, struct vm_area_struct *new_vma)
 arch/powerpc/kernel/vdso.c-63-{
 arch/powerpc/kernel/vdso.c:64:	return vdso_mremap(sm, new_vma, &vdso64_end - &vdso64_start);
 arch/powerpc/kernel/vdso.c-65-}
 --
 arch/powerpc/kernel/vdso.c=67=static void vdso_close(const struct vm_special_mapping *sm, struct vm_area_struct *vma)
 --
 arch/powerpc/kernel/vdso.c-71-	/*
 arch/powerpc/kernel/vdso.c:72:	 * close() is called for munmap() but also for mremap(). In the mremap()
 arch/powerpc/kernel/vdso.c:73:	 * case the vdso pointer has already been updated by the mremap() hook
 arch/powerpc/kernel/vdso.c-74-	 * above, so it must not be set to NULL here.
 --
 arch/powerpc/mm/book3s64/radix_tlb.c=1018=static inline void __radix__flush_tlb_range(struct mm_struct *mm,
 --
 arch/powerpc/mm/book3s64/radix_tlb.c-1046-	 * flush check the range is more than PMD and force a pwc flush
 arch/powerpc/mm/book3s64/radix_tlb.c:1047:	 * mremap() depends on this behaviour.
 arch/powerpc/mm/book3s64/radix_tlb.c-1048-	 */
 --
 arch/riscv/include/asm/dmi.h-17-
 arch/riscv/include/asm/dmi.h:18:#define dmi_early_remap(x, l)		memremap(x, l, MEMREMAP_WB)
 arch/riscv/include/asm/dmi.h-19-#define dmi_early_unmap(x, l)		memunmap(x)
 arch/riscv/include/asm/dmi.h:20:#define dmi_remap(x, l)			memremap(x, l, MEMREMAP_WB)
 arch/riscv/include/asm/dmi.h-21-#define dmi_unmap(x)			memunmap(x)
 --
 arch/riscv/kernel/acpi.c=202=struct acpi_madt_rintc *acpi_cpu_get_madt_rintc(int cpu)
 --
 arch/riscv/kernel/acpi.c-208- * __acpi_map_table() will be called before paging_init(), so early_ioremap()
 arch/riscv/kernel/acpi.c:209: * or early_memremap() should be called here to for ACPI table mapping.
 arch/riscv/kernel/acpi.c-210- */
 arch/riscv/kernel/acpi.c=211=void __init __iomem *__acpi_map_table(unsigned long phys, unsigned long size)
 --
 arch/riscv/kernel/acpi.c-215-
 arch/riscv/kernel/acpi.c:216:	return early_memremap(phys, size);
 arch/riscv/kernel/acpi.c-217-}
 --
 arch/riscv/kernel/crash_dump.c=12=ssize_t copy_oldmem_page(struct iov_iter *iter, unsigned long pfn,
 --
 arch/riscv/kernel/crash_dump.c-19-
 arch/riscv/kernel/crash_dump.c:20:	vaddr = memremap(__pfn_to_phys(pfn), PAGE_SIZE, MEMREMAP_WB);
 arch/riscv/kernel/crash_dump.c-21-	if (!vaddr)
 --
 arch/riscv/kernel/vdso.c=33=static struct __vdso_info compat_vdso_info;
 --
 arch/riscv/kernel/vdso.c-35-
 arch/riscv/kernel/vdso.c:36:static int vdso_mremap(const struct vm_special_mapping *sm,
 arch/riscv/kernel/vdso.c-37-		       struct vm_area_struct *new_vma)
 --
 arch/s390/kernel/vdso.c=25=extern char vdso_start[], vdso_end[];
 arch/s390/kernel/vdso.c-26-
 arch/s390/kernel/vdso.c:27:static int vdso_mremap(const struct vm_special_mapping *sm,
 arch/s390/kernel/vdso.c-28-		       struct vm_area_struct *vma)
 --
 arch/sparc/kernel/sys_sparc_64.c=495=SYSCALL_DEFINE5(64_mremap, unsigned long, addr,	unsigned long, old_len,
 --
 arch/sparc/kernel/sys_sparc_64.c-500-		return -EINVAL;
 arch/sparc/kernel/sys_sparc_64.c:501:	return sys_mremap(addr, old_len, new_len, flags, new_addr);
 arch/sparc/kernel/sys_sparc_64.c-502-}
 --
 arch/sparc/kernel/systbls.h=38=asmlinkage long sys64_munmap(unsigned long addr, size_t len);
 arch/sparc/kernel/systbls.h:39:asmlinkage unsigned long sys64_mremap(unsigned long addr,
 arch/sparc/kernel/systbls.h-40-				      unsigned long old_len,
 --
 arch/um/os-Linux/file.c=716=void *os_mremap_rw_shared(void *old_addr, size_t old_size, size_t new_size)
 --
 arch/um/os-Linux/file.c-719-
 arch/um/os-Linux/file.c:720:	res = mremap(old_addr, old_size, new_size, MREMAP_MAYMOVE, NULL);
 arch/um/os-Linux/file.c-721-
 --
 arch/x86/entry/vdso/vma.c=65=static void vdso_fix_landing(const struct vdso_image *image,
 --
 arch/x86/entry/vdso/vma.c-79-
 arch/x86/entry/vdso/vma.c:80:static int vdso_mremap(const struct vm_special_mapping *sm,
 arch/x86/entry/vdso/vma.c-81-		struct vm_area_struct *new_vma)
 --
 arch/x86/hyperv/hv_init.c=118=static int hv_cpu_init(unsigned int cpu)
 --
 arch/x86/hyperv/hv_init.c-137-		rdmsrq(HV_X64_MSR_VP_ASSIST_PAGE, msr.as_uint64);
 arch/x86/hyperv/hv_init.c:138:		*hvp = memremap(msr.pfn << HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT,
 arch/x86/hyperv/hv_init.c-139-				PAGE_SIZE, MEMREMAP_WB);
 --
 arch/x86/hyperv/hv_init.c=451=void __init hyperv_init(void)
 --
 arch/x86/hyperv/hv_init.c-553-		pg = vmalloc_to_page(hv_hypercall_pg);
 arch/x86/hyperv/hv_init.c:554:		src = memremap(hypercall_msr.guest_physical_address << PAGE_SHIFT, PAGE_SIZE,
 arch/x86/hyperv/hv_init.c-555-				MEMREMAP_WB);
 --
 arch/x86/include/asm/dmi.h=11=static __always_inline __init void *dmi_alloc(unsigned len)
 --
 arch/x86/include/asm/dmi.h-18-#define dmi_early_unmap		early_memunmap
 arch/x86/include/asm/dmi.h:19:#define dmi_remap(_x, _l)	memremap(_x, _l, MEMREMAP_WB)
 arch/x86/include/asm/dmi.h-20-#define dmi_unmap(_x)		memunmap(_x)
 --
 arch/x86/kernel/acpi/boot.c=102=static u32 isa_irq_to_gsi[NR_IRQS_LEGACY] __read_mostly = {
 --
 arch/x86/kernel/acpi/boot.c-106-/*
 arch/x86/kernel/acpi/boot.c:107: * This is just a simple wrapper around early_memremap(),
 arch/x86/kernel/acpi/boot.c-108- * with sanity checks for phys == 0 and size == 0.
 --
 arch/x86/kernel/acpi/boot.c=110=void __init __iomem *__acpi_map_table(unsigned long phys, unsigned long size)
 --
 arch/x86/kernel/acpi/boot.c-115-
 arch/x86/kernel/acpi/boot.c:116:	return early_memremap(phys, size);
 arch/x86/kernel/acpi/boot.c-117-}
 --
 arch/x86/kernel/acpi/madt_wakeup.c=129=static int acpi_wakeup_cpu(u32 apicid, unsigned long start_ip, unsigned int cpu)
 --
 arch/x86/kernel/acpi/madt_wakeup.c-142-	if (!acpi_mp_wake_mailbox) {
 arch/x86/kernel/acpi/madt_wakeup.c:143:		acpi_mp_wake_mailbox = memremap(acpi_mp_wake_mailbox_paddr,
 arch/x86/kernel/acpi/madt_wakeup.c-144-						sizeof(*acpi_mp_wake_mailbox),
 --
 arch/x86/kernel/cpu/sgx/main.c=625=static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
 --
 arch/x86/kernel/cpu/sgx/main.c-631-
 arch/x86/kernel/cpu/sgx/main.c:632:	section->virt_addr = memremap(phys_addr, size, MEMREMAP_WB);
 arch/x86/kernel/cpu/sgx/main.c-633-	if (!section->virt_addr)
 --
 arch/x86/kernel/devicetree.c=292=void __init x86_flattree_get_config(void)
 --
 arch/x86/kernel/devicetree.c-300-
 arch/x86/kernel/devicetree.c:301:		dt = early_memremap(initial_dtb, map_len);
 arch/x86/kernel/devicetree.c-302-		size = fdt_totalsize(dt);
 --
 arch/x86/kernel/devicetree.c-304-			early_memunmap(dt, map_len);
 arch/x86/kernel/devicetree.c:305:			dt = early_memremap(initial_dtb, size);
 arch/x86/kernel/devicetree.c-306-			map_len = size;
 --
 arch/x86/kernel/e820.c=725=void __init e820__memory_setup_extended(u64 phys_addr, u32 data_len)
 --
 arch/x86/kernel/e820.c-730-
 arch/x86/kernel/e820.c:731:	sdata = early_memremap(phys_addr, data_len);
 arch/x86/kernel/e820.c-732-	entries = sdata->len / sizeof(*extmap);
 --
 arch/x86/kernel/eisa.c=12=static __init int eisa_bus_probe(void)
 --
 arch/x86/kernel/eisa.c-18-
 arch/x86/kernel/eisa.c:19:	p = memremap(0x0FFFD9, 4, MEMREMAP_WB);
 arch/x86/kernel/eisa.c-20-	if (p && *p == 'E' + ('I' << 8) + ('S' << 16) + ('A' << 24))
 --
 arch/x86/kernel/jailhouse.c=198=static void __init jailhouse_init_platform(void)
 --
 arch/x86/kernel/jailhouse.c-223-	while (pa_data) {
 arch/x86/kernel/jailhouse.c:224:		mapping = early_memremap(pa_data, sizeof(header));
 arch/x86/kernel/jailhouse.c-225-		memcpy(&header, mapping, sizeof(header));
 --
 arch/x86/kernel/jailhouse.c-243-			       (unsigned long)header.len);
 arch/x86/kernel/jailhouse.c:244:	mapping = early_memremap(pa_data, setup_data_len);
 arch/x86/kernel/jailhouse.c-245-	memcpy(&setup_data, mapping, setup_data_len);
 --
 arch/x86/kernel/kdebugfs.c=29=static ssize_t setup_data_read(struct file *file, char __user *user_buf,
 --
 arch/x86/kernel/kdebugfs.c-52-
 arch/x86/kernel/kdebugfs.c:53:	p = memremap(pa, count, MEMREMAP_WB);
 arch/x86/kernel/kdebugfs.c-54-	if (!p)
 --
 arch/x86/kernel/kdebugfs.c=89=static int __init create_setup_data_nodes(struct dentry *parent)
 --
 arch/x86/kernel/kdebugfs.c-110-
 arch/x86/kernel/kdebugfs.c:111:		data = memremap(pa_data, sizeof(*data), MEMREMAP_WB);
 arch/x86/kernel/kdebugfs.c-112-		if (!data) {
 --
 arch/x86/kernel/kdebugfs.c-121-			memunmap(data);
 arch/x86/kernel/kdebugfs.c:122:			data = memremap(pa_data, len, MEMREMAP_WB);
 arch/x86/kernel/kdebugfs.c-123-			if (!data) {
 --
 arch/x86/kernel/ksysfs.c=70=static int get_setup_data_paddr(int nr, u64 *paddr)
 --
 arch/x86/kernel/ksysfs.c-80-		}
 arch/x86/kernel/ksysfs.c:81:		data = memremap(pa_data, sizeof(*data), MEMREMAP_WB);
 arch/x86/kernel/ksysfs.c-82-		if (!data)
 --
 arch/x86/kernel/ksysfs.c=92=static int __init get_setup_data_size(int nr, size_t *size)
 --
 arch/x86/kernel/ksysfs.c-100-	while (pa_data) {
 arch/x86/kernel/ksysfs.c:101:		data = memremap(pa_data, sizeof(*data), MEMREMAP_WB);
 arch/x86/kernel/ksysfs.c-102-		if (!data)
 --
 arch/x86/kernel/ksysfs.c-109-				memunmap(data);
 arch/x86/kernel/ksysfs.c:110:				data = memremap(pa_data, len, MEMREMAP_WB);
 arch/x86/kernel/ksysfs.c-111-				if (!data)
 --
 arch/x86/kernel/ksysfs.c=135=static ssize_t type_show(struct kobject *kobj,
 --
 arch/x86/kernel/ksysfs.c-150-		return ret;
 arch/x86/kernel/ksysfs.c:151:	data = memremap(paddr, sizeof(*data), MEMREMAP_WB);
 arch/x86/kernel/ksysfs.c-152-	if (!data)
 --
 arch/x86/kernel/ksysfs.c-157-		memunmap(data);
 arch/x86/kernel/ksysfs.c:158:		data = memremap(paddr, len, MEMREMAP_WB);
 arch/x86/kernel/ksysfs.c-159-		if (!data)
 --
 arch/x86/kernel/ksysfs.c=173=static ssize_t setup_data_data_read(struct file *fp,
 --
 arch/x86/kernel/ksysfs.c-191-		return ret;
 arch/x86/kernel/ksysfs.c:192:	data = memremap(paddr, sizeof(*data), MEMREMAP_WB);
 arch/x86/kernel/ksysfs.c-193-	if (!data)
 --
 arch/x86/kernel/ksysfs.c-198-		memunmap(data);
 arch/x86/kernel/ksysfs.c:199:		data = memremap(paddr, len, MEMREMAP_WB);
 arch/x86/kernel/ksysfs.c-200-		if (!data)
 --
 arch/x86/kernel/ksysfs.c-233-	ret = count;
 arch/x86/kernel/ksysfs.c:234:	p = memremap(paddr, len, MEMREMAP_WB);
 arch/x86/kernel/ksysfs.c-235-	if (!p) {
 --
 arch/x86/kernel/ksysfs.c=306=static int __init get_setup_data_total_num(u64 pa_data, int *nr)
 --
 arch/x86/kernel/ksysfs.c-313-		*nr += 1;
 arch/x86/kernel/ksysfs.c:314:		data = memremap(pa_data, sizeof(*data), MEMREMAP_WB);
 arch/x86/kernel/ksysfs.c-315-		if (!data) {
 --
 arch/x86/kernel/machine_kexec_64.c=101=map_efi_systab(struct x86_mapping_info *info, pgd_t *level4p)
 --
 arch/x86/kernel/machine_kexec_64.c-125-
 arch/x86/kernel/machine_kexec_64.c:126:	kaddr = memremap(mstart, mend - mstart, MEMREMAP_WB);
 arch/x86/kernel/machine_kexec_64.c-127-	if (!kaddr) {
 --
 arch/x86/kernel/mpparse.c=412=static unsigned long __init get_mpc_size(unsigned long physptr)
 --
 arch/x86/kernel/mpparse.c-416-
 arch/x86/kernel/mpparse.c:417:	mpc = early_memremap(physptr, PAGE_SIZE);
 arch/x86/kernel/mpparse.c-418-	size = mpc->length;
 --
 arch/x86/kernel/mpparse.c=425=static int __init check_physptr(struct mpf_intel *mpf, unsigned int early)
 --
 arch/x86/kernel/mpparse.c-430-	size = get_mpc_size(mpf->physptr);
 arch/x86/kernel/mpparse.c:431:	mpc = early_memremap(mpf->physptr, size);
 arch/x86/kernel/mpparse.c-432-
 --
 arch/x86/kernel/mpparse.c=477=static __init void mpparse_get_smp_config(unsigned int early)
 --
 arch/x86/kernel/mpparse.c-496-
 arch/x86/kernel/mpparse.c:497:	mpf = early_memremap(mpf_base, sizeof(*mpf));
 arch/x86/kernel/mpparse.c-498-	if (!mpf) {
 --
 arch/x86/kernel/mpparse.c=557=static int __init smp_scan_config(unsigned long base, unsigned long length)
 --
 arch/x86/kernel/mpparse.c-566-	while (length > 0) {
 arch/x86/kernel/mpparse.c:567:		bp = early_memremap(base, length);
 arch/x86/kernel/mpparse.c-568-		mpf = (struct mpf_intel *)bp;
 --
 arch/x86/kernel/mpparse.c=835=static int __init update_mp_table(void)
 --
 arch/x86/kernel/mpparse.c-848-
 arch/x86/kernel/mpparse.c:849:	mpf = early_memremap(mpf_base, sizeof(*mpf));
 arch/x86/kernel/mpparse.c-850-	if (!mpf) {
 arch/x86/kernel/mpparse.c:851:		pr_err("MPTABLE: mpf early_memremap() failed\n");
 arch/x86/kernel/mpparse.c-852-		return 0;
 --
 arch/x86/kernel/mpparse.c-864-	size = get_mpc_size(mpf->physptr);
 arch/x86/kernel/mpparse.c:865:	mpc = early_memremap(mpf->physptr, size);
 arch/x86/kernel/mpparse.c-866-	if (!mpc) {
 arch/x86/kernel/mpparse.c:867:		pr_err("MPTABLE: mpc early_memremap() failed\n");
 arch/x86/kernel/mpparse.c-868-		goto do_unmap_mpf;
 --
 arch/x86/kernel/mpparse.c-895-	} else {
 arch/x86/kernel/mpparse.c:896:		mpc_new = early_memremap(mpc_new_phys, mpc_new_length);
 arch/x86/kernel/mpparse.c-897-		if (!mpc_new) {
 arch/x86/kernel/mpparse.c:898:			pr_err("MPTABLE: new mpc early_memremap() failed\n");
 arch/x86/kernel/mpparse.c-899-			goto do_unmap_mpc;
 --
 arch/x86/kernel/mpparse.c-909-			/* steal 16 bytes from [0, 1k) */
 arch/x86/kernel/mpparse.c:910:			mpf_new = early_memremap(0x400 - 16, sizeof(*mpf_new));
 arch/x86/kernel/mpparse.c-911-			if (!mpf_new) {
 arch/x86/kernel/mpparse.c:912:				pr_err("MPTABLE: new mpf early_memremap() failed\n");
 arch/x86/kernel/mpparse.c-913-				goto do_unmap_mpc;
 --
 arch/x86/kernel/setup.c=402=static void __init add_early_ima_buffer(u64 phys_addr)
 --
 arch/x86/kernel/setup.c-406-
 arch/x86/kernel/setup.c:407:	data = early_memremap(phys_addr + sizeof(struct setup_data), sizeof(*data));
 arch/x86/kernel/setup.c-408-	if (!data) {
 --
 arch/x86/kernel/setup.c=452=static void __init add_kho(u64 phys_addr, u32 data_len)
 --
 arch/x86/kernel/setup.c-462-
 arch/x86/kernel/setup.c:463:	kho = early_memremap(addr, size);
 arch/x86/kernel/setup.c-464-	if (!kho) {
 --
 arch/x86/kernel/setup.c=475=static void __init parse_setup_data(void)
 --
 arch/x86/kernel/setup.c-483-
 arch/x86/kernel/setup.c:484:		data = early_memremap(pa_data, sizeof(*data));
 arch/x86/kernel/setup.c-485-		data_len = data->len + sizeof(struct setup_data);
 --
 arch/x86/kernel/setup.c-506-		case SETUP_RNG_SEED:
 arch/x86/kernel/setup.c:507:			data = early_memremap(pa_data, data_len);
 arch/x86/kernel/setup.c-508-			add_bootloader_randomness(data->data, data->len);
 --
 arch/x86/kernel/setup.c=564=static void __init memblock_x86_reserve_range_setup_data(void)
 --
 arch/x86/kernel/setup.c-572-	while (pa_data) {
 arch/x86/kernel/setup.c:573:		data = early_memremap(pa_data, sizeof(*data));
 arch/x86/kernel/setup.c-574-		if (!data) {
 --
 arch/x86/kernel/setup.c-586-			early_memunmap(data, sizeof(*data));
 arch/x86/kernel/setup.c:587:			data = early_memremap(pa_data, len);
 arch/x86/kernel/setup.c-588-			if (!data) {
 --
 arch/x86/kernel/uprobes.c=611=static void riprel_post_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
 --
 arch/x86/kernel/uprobes.c-620-
 arch/x86/kernel/uprobes.c:621:static int tramp_mremap(const struct vm_special_mapping *sm, struct vm_area_struct *new_vma)
 arch/x86/kernel/uprobes.c-622-{
 --
 arch/x86/mm/ioremap.c=517=void *xlate_dev_mem_ptr(phys_addr_t phys)
 --
 arch/x86/mm/ioremap.c-522-
 arch/x86/mm/ioremap.c:523:	/* memremap() maps if RAM, otherwise falls back to ioremap() */
 arch/x86/mm/ioremap.c:524:	vaddr = memremap(start, PAGE_SIZE, MEMREMAP_WB);
 arch/x86/mm/ioremap.c-525-
 arch/x86/mm/ioremap.c:526:	/* Only add the offset on success and return NULL if memremap() failed */
 arch/x86/mm/ioremap.c-527-	if (vaddr)
 --
 arch/x86/mm/ioremap.c=641=static bool __ref __memremap_is_setup_data(resource_size_t phys_addr, bool early)
 --
 arch/x86/mm/ioremap.c-657-		else
 arch/x86/mm/ioremap.c:658:			data = memremap(paddr, setup_data_sz, MEMREMAP_WB | MEMREMAP_DEC);
 arch/x86/mm/ioremap.c-659-		if (!data) {
 --
 arch/x86/mm/ioremap.c-684-				memunmap(data);
 arch/x86/mm/ioremap.c:685:				data = memremap(paddr, size, MEMREMAP_WB | MEMREMAP_DEC);
 arch/x86/mm/ioremap.c-686-			}
 --
 arch/x86/mm/ioremap.c=729=bool arch_memremap_can_ram_remap(resource_size_t phys_addr, unsigned long size,
 --
 arch/x86/mm/ioremap.c-751- * Architecture override of __weak function to adjust the protection attributes
 arch/x86/mm/ioremap.c:752: * used when remapping memory. By default, early_memremap() will map the data
 arch/x86/mm/ioremap.c-753- * as encrypted. Determine if an encrypted mapping should not be done and set
 --
 arch/x86/pci/common.c=643=int pcibios_device_add(struct pci_dev *dev)
 --
 arch/x86/pci/common.c-651-	while (pa_data) {
 arch/x86/pci/common.c:652:		data = memremap(pa_data, sizeof(*rom), MEMREMAP_WB);
 arch/x86/pci/common.c-653-		if (!data)
 --
 arch/x86/platform/efi/efi.c=436=static int __init efi_config_init(const efi_config_table_type_t *arch_tables)
 --
 arch/x86/platform/efi/efi.c-451-	 */
 arch/x86/platform/efi/efi.c:452:	config_tables = early_memremap(efi_config_table, efi_nr_tables * sz);
 arch/x86/platform/efi/efi.c-453-	if (config_tables == NULL) {
 --
]
]

23/2 2026/03/11 11:13 llm crash-repro-finder
0m Model:
gemini-3-pro-preview

Tokens:
							input: 36065
							output: 19
							thoughts: 15
						

Thoughts:
Here's my attempt at summarizing the thought process, tailored for an expert audience:

**Investigating `mremap` in `sys.txt`**

Okay, time to dig into this. I'm thinking the best starting point is to zero in on `mremap`. It's a key system call, and I need to understand its behavior in the context of `sys.txt`. What specific details am I looking for? The implementation details, potential vulnerabilities, interaction with other memory management primitives, error handling scenarios... the usual suspects. I'll be sifting through the code, looking for edge cases, performance implications, and any gotchas that might be relevant to the current problem at hand. Hopefully I can spot some interesting implementation detail that will help here, this is the best place to start.



24/2 2026/03/11 11:14 tool grepper
0m Error:
no matches

Args:
map[Expression:^mremap\(]

Results:
map[Output:]

25/2 2026/03/11 11:14 llm crash-repro-finder
0m Model:
gemini-3-pro-preview

Tokens:
							input: 36282
							output: 18
							thoughts: 19
						

Thoughts:
Here's a summary of my thought process:

**Diving into System Calls: A Focused Check**

Okay, let's zero in on `mremap`. I need to verify its behavior within the `sys.txt` context, but without getting distracted by any general system-wide search. The "^" character would indicate a beginning-of-line search, which I don't need or want right now. I'm looking for *specific* instances, not just lines that start with the term. This is about precision and avoiding irrelevant hits. Time to get to work.



26/2 2026/03/11 11:14 tool grepper
0m Args:
map[Expression:mremap\(]

Results:
map[Output:
Full output is too long, showing 500 out of 2317 lines.
Use more precise expression if possible.

[Documentation/admin-guide/mm/nommu-mmap.rst=180=No-MMU mremap
 --
 Documentation/admin-guide/mm/nommu-mmap.rst-182-
 Documentation/admin-guide/mm/nommu-mmap.rst:183:The mremap() function is partially supported.  It may change the size of a
 Documentation/admin-guide/mm/nommu-mmap.rst-184-mapping, and may move it [#]_ if MREMAP_MAYMOVE is specified and if the new size
 --
 Documentation/admin-guide/mm/nommu-mmap.rst=192=even if they are not currently shared.
 Documentation/admin-guide/mm/nommu-mmap.rst-193-
 Documentation/admin-guide/mm/nommu-mmap.rst:194:The mremap() function must be given an exact match for base address and size of
 Documentation/admin-guide/mm/nommu-mmap.rst-195-a previously mapped object.  It may not be used to create holes in existing
 --
 Documentation/admin-guide/mm/userfaultfd.rst=371=bits in ``uffdio_api.features`` passed to ``UFFDIO_API`` ioctl:
 --
 Documentation/admin-guide/mm/userfaultfd.rst-380-``UFFD_FEATURE_EVENT_REMAP``
 Documentation/admin-guide/mm/userfaultfd.rst:381:	enable notifications about mremap() calls. When the
 Documentation/admin-guide/mm/userfaultfd.rst-382-	non-cooperative process moves a virtual memory area to a
 --
 Documentation/admin-guide/mm/userfaultfd.rst=405=area is unmapped, either explicitly (with munmap() system call), or
 Documentation/admin-guide/mm/userfaultfd.rst:406:implicitly (e.g. during mremap()), the area is removed and in turn the
 Documentation/admin-guide/mm/userfaultfd.rst-407-``userfaultfd`` context for such area disappears too and the manager will
 --
 Documentation/arch/arm64/tagged-address-abi.rst=44=how the user addresses are used by the kernel:
 --
 Documentation/arch/arm64/tagged-address-abi.rst-50-   - ``brk()``, ``mmap()`` and the ``new_address`` argument to
 Documentation/arch/arm64/tagged-address-abi.rst:51:     ``mremap()`` as these have the potential to alias with existing
 Documentation/arch/arm64/tagged-address-abi.rst-52-     user addresses.
 --
 Documentation/arch/arm64/tagged-address-abi.rst-55-     incorrectly accept valid tagged pointers for the ``brk()``,
 Documentation/arch/arm64/tagged-address-abi.rst:56:     ``mmap()`` and ``mremap()`` system calls.
 Documentation/arch/arm64/tagged-address-abi.rst-57-
 --
 Documentation/arch/arm64/tagged-address-abi.rst=116=ABI relaxation:
 --
 Documentation/arch/arm64/tagged-address-abi.rst-129-
 Documentation/arch/arm64/tagged-address-abi.rst:130:- ``mremap()``, the ``new_address`` argument (since kernel v5.6).
 Documentation/arch/arm64/tagged-address-abi.rst-131-
 --
 Documentation/driver-api/device-io.rst=427=It should also not be used for actual RAM, as the returned pointer is an
 Documentation/driver-api/device-io.rst:428:``__iomem`` token. memremap() can be used for mapping normal RAM that is outside
 Documentation/driver-api/device-io.rst-429-of the linear kernel memory area to a regular pointer.
 --
 Documentation/gpu/rfc/gpusvm.rst=7=Agreed upon design principles
 --
 Documentation/gpu/rfc/gpusvm.rst-53-	* This does not work, as no pointers from physical memory to virtual
 Documentation/gpu/rfc/gpusvm.rst:54:	  memory should exist. mremap() is an example of the core MM updating
 Documentation/gpu/rfc/gpusvm.rst-55-	  the virtual address without notifying the driver of address
 --
 arch/arm/include/asm/dmi.h-8-
 arch/arm/include/asm/dmi.h:9:#define dmi_early_remap(x, l)		memremap(x, l, MEMREMAP_WB)
 arch/arm/include/asm/dmi.h-10-#define dmi_early_unmap(x, l)		memunmap(x)
 arch/arm/include/asm/dmi.h:11:#define dmi_remap(x, l)			memremap(x, l, MEMREMAP_WB)
 arch/arm/include/asm/dmi.h-12-#define dmi_unmap(x)			memunmap(x)
 --
 arch/arm/kernel/process.c=380=extern struct page *get_signal_page(void);
 arch/arm/kernel/process.c-381-
 arch/arm/kernel/process.c:382:static int sigpage_mremap(const struct vm_special_mapping *sm,
 arch/arm/kernel/process.c-383-		struct vm_area_struct *new_vma)
 --
 arch/arm/kernel/vdso.c=35=unsigned int vdso_total_pages __ro_after_init;
 arch/arm/kernel/vdso.c-36-
 arch/arm/kernel/vdso.c:37:static int vdso_mremap(const struct vm_special_mapping *sm,
 arch/arm/kernel/vdso.c-38-		struct vm_area_struct *new_vma)
 --
 arch/arm64/kernel/acpi.c=72=static bool __init dt_is_stub(void)
 --
 arch/arm64/kernel/acpi.c-91- * __acpi_map_table() will be called before page_init(), so early_ioremap()
 arch/arm64/kernel/acpi.c:92: * or early_memremap() should be called here to for ACPI table mapping.
 arch/arm64/kernel/acpi.c-93- */
 arch/arm64/kernel/acpi.c=94=void __init __iomem *__acpi_map_table(unsigned long phys, unsigned long size)
 --
 arch/arm64/kernel/acpi.c-98-
 arch/arm64/kernel/acpi.c:99:	return early_memremap(phys, size);
 arch/arm64/kernel/acpi.c-100-}
 --
 arch/arm64/kernel/crash_dump.c=15=ssize_t copy_oldmem_page(struct iov_iter *iter, unsigned long pfn,
 --
 arch/arm64/kernel/crash_dump.c-22-
 arch/arm64/kernel/crash_dump.c:23:	vaddr = memremap(__pfn_to_phys(pfn), PAGE_SIZE, MEMREMAP_WB);
 arch/arm64/kernel/crash_dump.c-24-	if (!vaddr)
 --
 arch/arm64/kernel/paravirt.c=94=static int stolen_time_cpu_online(unsigned int cpu)
 --
 arch/arm64/kernel/paravirt.c-106-
 arch/arm64/kernel/paravirt.c:107:	kaddr = memremap(res.a0,
 arch/arm64/kernel/paravirt.c-108-			      sizeof(struct pvclock_vcpu_stolen_time),
 --
 arch/arm64/kernel/vdso.c=45=static struct vdso_abi_info vdso_info[] __ro_after_init = {
 --
 arch/arm64/kernel/vdso.c-59-
 arch/arm64/kernel/vdso.c:60:static int vdso_mremap(const struct vm_special_mapping *sm,
 arch/arm64/kernel/vdso.c-61-		struct vm_area_struct *new_vma)
 --
 arch/arm64/kernel/vdso.c=157=static struct page *aarch32_sig_page __ro_after_init;
 arch/arm64/kernel/vdso.c-158-
 arch/arm64/kernel/vdso.c:159:static int aarch32_sigpage_mremap(const struct vm_special_mapping *sm,
 arch/arm64/kernel/vdso.c-160-				  struct vm_area_struct *new_vma)
 --
 arch/loongarch/kernel/acpi.c=38=void __init __iomem * __acpi_map_table(unsigned long phys, unsigned long size)
 --
 arch/loongarch/kernel/acpi.c-43-
 arch/loongarch/kernel/acpi.c:44:	return early_memremap(phys, size);
 arch/loongarch/kernel/acpi.c-45-}
 --
 arch/loongarch/kernel/crash_dump.c=6=ssize_t copy_oldmem_page(struct iov_iter *iter, unsigned long pfn,
 --
 arch/loongarch/kernel/crash_dump.c-13-
 arch/loongarch/kernel/crash_dump.c:14:	vaddr = memremap(__pfn_to_phys(pfn), PAGE_SIZE, MEMREMAP_WB);
 arch/loongarch/kernel/crash_dump.c-15-	if (!vaddr)
 --
 arch/loongarch/kernel/efi.c=82=static void __init init_screen_info(void)
 --
 arch/loongarch/kernel/efi.c-88-
 arch/loongarch/kernel/efi.c:89:	si = early_memremap(screen_info_table, sizeof(*si));
 arch/loongarch/kernel/efi.c-90-	if (!si) {
 --
 arch/loongarch/kernel/efi.c=101=void __init efi_init(void)
 --
 arch/loongarch/kernel/efi.c-124-	size = sizeof(efi_config_table_t);
 arch/loongarch/kernel/efi.c:125:	config_tables = early_memremap(efi_config_table, efi_nr_tables * size);
 arch/loongarch/kernel/efi.c-126-	efi_config_parse_tables(config_tables, efi_systab->nr_tables, arch_tables);
 --
 arch/loongarch/kernel/vdso.c=26=extern char vdso_start[], vdso_end[];
 arch/loongarch/kernel/vdso.c-27-
 arch/loongarch/kernel/vdso.c:28:static int vdso_mremap(const struct vm_special_mapping *sm, struct vm_area_struct *new_vma)
 arch/loongarch/kernel/vdso.c-29-{
 --
 arch/loongarch/mm/ioremap.c=19=void * __init early_memremap_ro(resource_size_t phys_addr, unsigned long size)
 arch/loongarch/mm/ioremap.c-20-{
 arch/loongarch/mm/ioremap.c:21:	return early_memremap(phys_addr, size);
 arch/loongarch/mm/ioremap.c-22-}
 --
 arch/loongarch/mm/ioremap.c=24=void * __init early_memremap_prot(resource_size_t phys_addr, unsigned long size,
 --
 arch/loongarch/mm/ioremap.c-26-{
 arch/loongarch/mm/ioremap.c:27:	return early_memremap(phys_addr, size);
 arch/loongarch/mm/ioremap.c-28-}
 --
 arch/mips/vdso/genvdso.c=207=int main(int argc, char **argv)
 --
 arch/mips/vdso/genvdso.c-261-	fprintf(out_file, "#include <asm/vdso.h>\n");
 arch/mips/vdso/genvdso.c:262:	fprintf(out_file, "static int vdso_mremap(\n");
 arch/mips/vdso/genvdso.c-263-	fprintf(out_file, "	const struct vm_special_mapping *sm,\n");
 --
 arch/parisc/kernel/vdso.c=27=extern char vdso64_start, vdso64_end;
 arch/parisc/kernel/vdso.c-28-
 arch/parisc/kernel/vdso.c:29:static int vdso_mremap(const struct vm_special_mapping *sm,
 arch/parisc/kernel/vdso.c-30-		       struct vm_area_struct *vma)
 --
 arch/powerpc/kernel/vdso.c=42=extern char vdso64_start, vdso64_end;
 arch/powerpc/kernel/vdso.c-43-
 arch/powerpc/kernel/vdso.c:44:static int vdso_mremap(const struct vm_special_mapping *sm, struct vm_area_struct *new_vma,
 arch/powerpc/kernel/vdso.c-45-		       unsigned long text_size)
 --
 arch/powerpc/kernel/vdso.c-56-
 arch/powerpc/kernel/vdso.c:57:static int vdso32_mremap(const struct vm_special_mapping *sm, struct vm_area_struct *new_vma)
 arch/powerpc/kernel/vdso.c-58-{
 arch/powerpc/kernel/vdso.c:59:	return vdso_mremap(sm, new_vma, &vdso32_end - &vdso32_start);
 arch/powerpc/kernel/vdso.c-60-}
 arch/powerpc/kernel/vdso.c-61-
 arch/powerpc/kernel/vdso.c:62:static int vdso64_mremap(const struct vm_special_mapping *sm, struct vm_area_struct *new_vma)
 arch/powerpc/kernel/vdso.c-63-{
 arch/powerpc/kernel/vdso.c:64:	return vdso_mremap(sm, new_vma, &vdso64_end - &vdso64_start);
 arch/powerpc/kernel/vdso.c-65-}
 --
 arch/powerpc/kernel/vdso.c=67=static void vdso_close(const struct vm_special_mapping *sm, struct vm_area_struct *vma)
 --
 arch/powerpc/kernel/vdso.c-71-	/*
 arch/powerpc/kernel/vdso.c:72:	 * close() is called for munmap() but also for mremap(). In the mremap()
 arch/powerpc/kernel/vdso.c:73:	 * case the vdso pointer has already been updated by the mremap() hook
 arch/powerpc/kernel/vdso.c-74-	 * above, so it must not be set to NULL here.
 --
 arch/powerpc/mm/book3s64/radix_tlb.c=1018=static inline void __radix__flush_tlb_range(struct mm_struct *mm,
 --
 arch/powerpc/mm/book3s64/radix_tlb.c-1046-	 * flush check the range is more than PMD and force a pwc flush
 arch/powerpc/mm/book3s64/radix_tlb.c:1047:	 * mremap() depends on this behaviour.
 arch/powerpc/mm/book3s64/radix_tlb.c-1048-	 */
 --
 arch/riscv/include/asm/dmi.h-17-
 arch/riscv/include/asm/dmi.h:18:#define dmi_early_remap(x, l)		memremap(x, l, MEMREMAP_WB)
 arch/riscv/include/asm/dmi.h-19-#define dmi_early_unmap(x, l)		memunmap(x)
 arch/riscv/include/asm/dmi.h:20:#define dmi_remap(x, l)			memremap(x, l, MEMREMAP_WB)
 arch/riscv/include/asm/dmi.h-21-#define dmi_unmap(x)			memunmap(x)
 --
 arch/riscv/kernel/acpi.c=202=struct acpi_madt_rintc *acpi_cpu_get_madt_rintc(int cpu)
 --
 arch/riscv/kernel/acpi.c-208- * __acpi_map_table() will be called before paging_init(), so early_ioremap()
 arch/riscv/kernel/acpi.c:209: * or early_memremap() should be called here to for ACPI table mapping.
 arch/riscv/kernel/acpi.c-210- */
 arch/riscv/kernel/acpi.c=211=void __init __iomem *__acpi_map_table(unsigned long phys, unsigned long size)
 --
 arch/riscv/kernel/acpi.c-215-
 arch/riscv/kernel/acpi.c:216:	return early_memremap(phys, size);
 arch/riscv/kernel/acpi.c-217-}
 --
 arch/riscv/kernel/crash_dump.c=12=ssize_t copy_oldmem_page(struct iov_iter *iter, unsigned long pfn,
 --
 arch/riscv/kernel/crash_dump.c-19-
 arch/riscv/kernel/crash_dump.c:20:	vaddr = memremap(__pfn_to_phys(pfn), PAGE_SIZE, MEMREMAP_WB);
 arch/riscv/kernel/crash_dump.c-21-	if (!vaddr)
 --
 arch/riscv/kernel/vdso.c=33=static struct __vdso_info compat_vdso_info;
 --
 arch/riscv/kernel/vdso.c-35-
 arch/riscv/kernel/vdso.c:36:static int vdso_mremap(const struct vm_special_mapping *sm,
 arch/riscv/kernel/vdso.c-37-		       struct vm_area_struct *new_vma)
 --
 arch/s390/kernel/vdso.c=25=extern char vdso_start[], vdso_end[];
 arch/s390/kernel/vdso.c-26-
 arch/s390/kernel/vdso.c:27:static int vdso_mremap(const struct vm_special_mapping *sm,
 arch/s390/kernel/vdso.c-28-		       struct vm_area_struct *vma)
 --
 arch/sparc/kernel/sys_sparc_64.c=495=SYSCALL_DEFINE5(64_mremap, unsigned long, addr,	unsigned long, old_len,
 --
 arch/sparc/kernel/sys_sparc_64.c-500-		return -EINVAL;
 arch/sparc/kernel/sys_sparc_64.c:501:	return sys_mremap(addr, old_len, new_len, flags, new_addr);
 arch/sparc/kernel/sys_sparc_64.c-502-}
 --
 arch/sparc/kernel/systbls.h=38=asmlinkage long sys64_munmap(unsigned long addr, size_t len);
 arch/sparc/kernel/systbls.h:39:asmlinkage unsigned long sys64_mremap(unsigned long addr,
 arch/sparc/kernel/systbls.h-40-				      unsigned long old_len,
 --
 arch/um/os-Linux/file.c=716=void *os_mremap_rw_shared(void *old_addr, size_t old_size, size_t new_size)
 --
 arch/um/os-Linux/file.c-719-
 arch/um/os-Linux/file.c:720:	res = mremap(old_addr, old_size, new_size, MREMAP_MAYMOVE, NULL);
 arch/um/os-Linux/file.c-721-
 --
 arch/x86/entry/vdso/vma.c=65=static void vdso_fix_landing(const struct vdso_image *image,
 --
 arch/x86/entry/vdso/vma.c-79-
 arch/x86/entry/vdso/vma.c:80:static int vdso_mremap(const struct vm_special_mapping *sm,
 arch/x86/entry/vdso/vma.c-81-		struct vm_area_struct *new_vma)
 --
 arch/x86/hyperv/hv_init.c=118=static int hv_cpu_init(unsigned int cpu)
 --
 arch/x86/hyperv/hv_init.c-137-		rdmsrq(HV_X64_MSR_VP_ASSIST_PAGE, msr.as_uint64);
 arch/x86/hyperv/hv_init.c:138:		*hvp = memremap(msr.pfn << HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT,
 arch/x86/hyperv/hv_init.c-139-				PAGE_SIZE, MEMREMAP_WB);
 --
 arch/x86/hyperv/hv_init.c=451=void __init hyperv_init(void)
 --
 arch/x86/hyperv/hv_init.c-553-		pg = vmalloc_to_page(hv_hypercall_pg);
 arch/x86/hyperv/hv_init.c:554:		src = memremap(hypercall_msr.guest_physical_address << PAGE_SHIFT, PAGE_SIZE,
 arch/x86/hyperv/hv_init.c-555-				MEMREMAP_WB);
 --
 arch/x86/include/asm/dmi.h=11=static __always_inline __init void *dmi_alloc(unsigned len)
 --
 arch/x86/include/asm/dmi.h-18-#define dmi_early_unmap		early_memunmap
 arch/x86/include/asm/dmi.h:19:#define dmi_remap(_x, _l)	memremap(_x, _l, MEMREMAP_WB)
 arch/x86/include/asm/dmi.h-20-#define dmi_unmap(_x)		memunmap(_x)
 --
 arch/x86/kernel/acpi/boot.c=102=static u32 isa_irq_to_gsi[NR_IRQS_LEGACY] __read_mostly = {
 --
 arch/x86/kernel/acpi/boot.c-106-/*
 arch/x86/kernel/acpi/boot.c:107: * This is just a simple wrapper around early_memremap(),
 arch/x86/kernel/acpi/boot.c-108- * with sanity checks for phys == 0 and size == 0.
 --
 arch/x86/kernel/acpi/boot.c=110=void __init __iomem *__acpi_map_table(unsigned long phys, unsigned long size)
 --
 arch/x86/kernel/acpi/boot.c-115-
 arch/x86/kernel/acpi/boot.c:116:	return early_memremap(phys, size);
 arch/x86/kernel/acpi/boot.c-117-}
 --
 arch/x86/kernel/acpi/madt_wakeup.c=129=static int acpi_wakeup_cpu(u32 apicid, unsigned long start_ip, unsigned int cpu)
 --
 arch/x86/kernel/acpi/madt_wakeup.c-142-	if (!acpi_mp_wake_mailbox) {
 arch/x86/kernel/acpi/madt_wakeup.c:143:		acpi_mp_wake_mailbox = memremap(acpi_mp_wake_mailbox_paddr,
 arch/x86/kernel/acpi/madt_wakeup.c-144-						sizeof(*acpi_mp_wake_mailbox),
 --
 arch/x86/kernel/cpu/sgx/main.c=625=static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
 --
 arch/x86/kernel/cpu/sgx/main.c-631-
 arch/x86/kernel/cpu/sgx/main.c:632:	section->virt_addr = memremap(phys_addr, size, MEMREMAP_WB);
 arch/x86/kernel/cpu/sgx/main.c-633-	if (!section->virt_addr)
 --
 arch/x86/kernel/devicetree.c=292=void __init x86_flattree_get_config(void)
 --
 arch/x86/kernel/devicetree.c-300-
 arch/x86/kernel/devicetree.c:301:		dt = early_memremap(initial_dtb, map_len);
 arch/x86/kernel/devicetree.c-302-		size = fdt_totalsize(dt);
 --
 arch/x86/kernel/devicetree.c-304-			early_memunmap(dt, map_len);
 arch/x86/kernel/devicetree.c:305:			dt = early_memremap(initial_dtb, size);
 arch/x86/kernel/devicetree.c-306-			map_len = size;
 --
 arch/x86/kernel/e820.c=725=void __init e820__memory_setup_extended(u64 phys_addr, u32 data_len)
 --
 arch/x86/kernel/e820.c-730-
 arch/x86/kernel/e820.c:731:	sdata = early_memremap(phys_addr, data_len);
 arch/x86/kernel/e820.c-732-	entries = sdata->len / sizeof(*extmap);
 --
 arch/x86/kernel/eisa.c=12=static __init int eisa_bus_probe(void)
 --
 arch/x86/kernel/eisa.c-18-
 arch/x86/kernel/eisa.c:19:	p = memremap(0x0FFFD9, 4, MEMREMAP_WB);
 arch/x86/kernel/eisa.c-20-	if (p && *p == 'E' + ('I' << 8) + ('S' << 16) + ('A' << 24))
 --
 arch/x86/kernel/jailhouse.c=198=static void __init jailhouse_init_platform(void)
 --
 arch/x86/kernel/jailhouse.c-223-	while (pa_data) {
 arch/x86/kernel/jailhouse.c:224:		mapping = early_memremap(pa_data, sizeof(header));
 arch/x86/kernel/jailhouse.c-225-		memcpy(&header, mapping, sizeof(header));
 --
 arch/x86/kernel/jailhouse.c-243-			       (unsigned long)header.len);
 arch/x86/kernel/jailhouse.c:244:	mapping = early_memremap(pa_data, setup_data_len);
 arch/x86/kernel/jailhouse.c-245-	memcpy(&setup_data, mapping, setup_data_len);
 --
 arch/x86/kernel/kdebugfs.c=29=static ssize_t setup_data_read(struct file *file, char __user *user_buf,
 --
 arch/x86/kernel/kdebugfs.c-52-
 arch/x86/kernel/kdebugfs.c:53:	p = memremap(pa, count, MEMREMAP_WB);
 arch/x86/kernel/kdebugfs.c-54-	if (!p)
 --
 arch/x86/kernel/kdebugfs.c=89=static int __init create_setup_data_nodes(struct dentry *parent)
 --
 arch/x86/kernel/kdebugfs.c-110-
 arch/x86/kernel/kdebugfs.c:111:		data = memremap(pa_data, sizeof(*data), MEMREMAP_WB);
 arch/x86/kernel/kdebugfs.c-112-		if (!data) {
 --
 arch/x86/kernel/kdebugfs.c-121-			memunmap(data);
 arch/x86/kernel/kdebugfs.c:122:			data = memremap(pa_data, len, MEMREMAP_WB);
 arch/x86/kernel/kdebugfs.c-123-			if (!data) {
 --
 arch/x86/kernel/ksysfs.c=70=static int get_setup_data_paddr(int nr, u64 *paddr)
 --
 arch/x86/kernel/ksysfs.c-80-		}
 arch/x86/kernel/ksysfs.c:81:		data = memremap(pa_data, sizeof(*data), MEMREMAP_WB);
 arch/x86/kernel/ksysfs.c-82-		if (!data)
 --
 arch/x86/kernel/ksysfs.c=92=static int __init get_setup_data_size(int nr, size_t *size)
 --
 arch/x86/kernel/ksysfs.c-100-	while (pa_data) {
 arch/x86/kernel/ksysfs.c:101:		data = memremap(pa_data, sizeof(*data), MEMREMAP_WB);
 arch/x86/kernel/ksysfs.c-102-		if (!data)
 --
 arch/x86/kernel/ksysfs.c-109-				memunmap(data);
 arch/x86/kernel/ksysfs.c:110:				data = memremap(pa_data, len, MEMREMAP_WB);
 arch/x86/kernel/ksysfs.c-111-				if (!data)
 --
 arch/x86/kernel/ksysfs.c=135=static ssize_t type_show(struct kobject *kobj,
 --
 arch/x86/kernel/ksysfs.c-150-		return ret;
 arch/x86/kernel/ksysfs.c:151:	data = memremap(paddr, sizeof(*data), MEMREMAP_WB);
 arch/x86/kernel/ksysfs.c-152-	if (!data)
 --
 arch/x86/kernel/ksysfs.c-157-		memunmap(data);
 arch/x86/kernel/ksysfs.c:158:		data = memremap(paddr, len, MEMREMAP_WB);
 arch/x86/kernel/ksysfs.c-159-		if (!data)
 --
 arch/x86/kernel/ksysfs.c=173=static ssize_t setup_data_data_read(struct file *fp,
 --
 arch/x86/kernel/ksysfs.c-191-		return ret;
 arch/x86/kernel/ksysfs.c:192:	data = memremap(paddr, sizeof(*data), MEMREMAP_WB);
 arch/x86/kernel/ksysfs.c-193-	if (!data)
 --
 arch/x86/kernel/ksysfs.c-198-		memunmap(data);
 arch/x86/kernel/ksysfs.c:199:		data = memremap(paddr, len, MEMREMAP_WB);
 arch/x86/kernel/ksysfs.c-200-		if (!data)
 --
 arch/x86/kernel/ksysfs.c-233-	ret = count;
 arch/x86/kernel/ksysfs.c:234:	p = memremap(paddr, len, MEMREMAP_WB);
 arch/x86/kernel/ksysfs.c-235-	if (!p) {
 --
 arch/x86/kernel/ksysfs.c=306=static int __init get_setup_data_total_num(u64 pa_data, int *nr)
 --
 arch/x86/kernel/ksysfs.c-313-		*nr += 1;
 arch/x86/kernel/ksysfs.c:314:		data = memremap(pa_data, sizeof(*data), MEMREMAP_WB);
 arch/x86/kernel/ksysfs.c-315-		if (!data) {
 --
 arch/x86/kernel/machine_kexec_64.c=101=map_efi_systab(struct x86_mapping_info *info, pgd_t *level4p)
 --
 arch/x86/kernel/machine_kexec_64.c-125-
 arch/x86/kernel/machine_kexec_64.c:126:	kaddr = memremap(mstart, mend - mstart, MEMREMAP_WB);
 arch/x86/kernel/machine_kexec_64.c-127-	if (!kaddr) {
 --
 arch/x86/kernel/mpparse.c=412=static unsigned long __init get_mpc_size(unsigned long physptr)
 --
 arch/x86/kernel/mpparse.c-416-
 arch/x86/kernel/mpparse.c:417:	mpc = early_memremap(physptr, PAGE_SIZE);
 arch/x86/kernel/mpparse.c-418-	size = mpc->length;
 --
 arch/x86/kernel/mpparse.c=425=static int __init check_physptr(struct mpf_intel *mpf, unsigned int early)
 --
 arch/x86/kernel/mpparse.c-430-	size = get_mpc_size(mpf->physptr);
 arch/x86/kernel/mpparse.c:431:	mpc = early_memremap(mpf->physptr, size);
 arch/x86/kernel/mpparse.c-432-
 --
 arch/x86/kernel/mpparse.c=477=static __init void mpparse_get_smp_config(unsigned int early)
 --
 arch/x86/kernel/mpparse.c-496-
 arch/x86/kernel/mpparse.c:497:	mpf = early_memremap(mpf_base, sizeof(*mpf));
 arch/x86/kernel/mpparse.c-498-	if (!mpf) {
 --
 arch/x86/kernel/mpparse.c=557=static int __init smp_scan_config(unsigned long base, unsigned long length)
 --
 arch/x86/kernel/mpparse.c-566-	while (length > 0) {
 arch/x86/kernel/mpparse.c:567:		bp = early_memremap(base, length);
 arch/x86/kernel/mpparse.c-568-		mpf = (struct mpf_intel *)bp;
 --
 arch/x86/kernel/mpparse.c=835=static int __init update_mp_table(void)
 --
 arch/x86/kernel/mpparse.c-848-
 arch/x86/kernel/mpparse.c:849:	mpf = early_memremap(mpf_base, sizeof(*mpf));
 arch/x86/kernel/mpparse.c-850-	if (!mpf) {
 arch/x86/kernel/mpparse.c:851:		pr_err("MPTABLE: mpf early_memremap() failed\n");
 arch/x86/kernel/mpparse.c-852-		return 0;
 --
 arch/x86/kernel/mpparse.c-864-	size = get_mpc_size(mpf->physptr);
 arch/x86/kernel/mpparse.c:865:	mpc = early_memremap(mpf->physptr, size);
 arch/x86/kernel/mpparse.c-866-	if (!mpc) {
 arch/x86/kernel/mpparse.c:867:		pr_err("MPTABLE: mpc early_memremap() failed\n");
 arch/x86/kernel/mpparse.c-868-		goto do_unmap_mpf;
 --
 arch/x86/kernel/mpparse.c-895-	} else {
 arch/x86/kernel/mpparse.c:896:		mpc_new = early_memremap(mpc_new_phys, mpc_new_length);
 arch/x86/kernel/mpparse.c-897-		if (!mpc_new) {
 arch/x86/kernel/mpparse.c:898:			pr_err("MPTABLE: new mpc early_memremap() failed\n");
 arch/x86/kernel/mpparse.c-899-			goto do_unmap_mpc;
 --
 arch/x86/kernel/mpparse.c-909-			/* steal 16 bytes from [0, 1k) */
 arch/x86/kernel/mpparse.c:910:			mpf_new = early_memremap(0x400 - 16, sizeof(*mpf_new));
 arch/x86/kernel/mpparse.c-911-			if (!mpf_new) {
 arch/x86/kernel/mpparse.c:912:				pr_err("MPTABLE: new mpf early_memremap() failed\n");
 arch/x86/kernel/mpparse.c-913-				goto do_unmap_mpc;
 --
 arch/x86/kernel/setup.c=402=static void __init add_early_ima_buffer(u64 phys_addr)
 --
 arch/x86/kernel/setup.c-406-
 arch/x86/kernel/setup.c:407:	data = early_memremap(phys_addr + sizeof(struct setup_data), sizeof(*data));
 arch/x86/kernel/setup.c-408-	if (!data) {
 --
 arch/x86/kernel/setup.c=452=static void __init add_kho(u64 phys_addr, u32 data_len)
 --
 arch/x86/kernel/setup.c-462-
 arch/x86/kernel/setup.c:463:	kho = early_memremap(addr, size);
 arch/x86/kernel/setup.c-464-	if (!kho) {
 --
 arch/x86/kernel/setup.c=475=static void __init parse_setup_data(void)
 --
 arch/x86/kernel/setup.c-483-
 arch/x86/kernel/setup.c:484:		data = early_memremap(pa_data, sizeof(*data));
 arch/x86/kernel/setup.c-485-		data_len = data->len + sizeof(struct setup_data);
 --
 arch/x86/kernel/setup.c-506-		case SETUP_RNG_SEED:
 arch/x86/kernel/setup.c:507:			data = early_memremap(pa_data, data_len);
 arch/x86/kernel/setup.c-508-			add_bootloader_randomness(data->data, data->len);
 --
 arch/x86/kernel/setup.c=564=static void __init memblock_x86_reserve_range_setup_data(void)
 --
 arch/x86/kernel/setup.c-572-	while (pa_data) {
 arch/x86/kernel/setup.c:573:		data = early_memremap(pa_data, sizeof(*data));
 arch/x86/kernel/setup.c-574-		if (!data) {
 --
 arch/x86/kernel/setup.c-586-			early_memunmap(data, sizeof(*data));
 arch/x86/kernel/setup.c:587:			data = early_memremap(pa_data, len);
 arch/x86/kernel/setup.c-588-			if (!data) {
 --
 arch/x86/kernel/uprobes.c=611=static void riprel_post_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
 --
 arch/x86/kernel/uprobes.c-620-
 arch/x86/kernel/uprobes.c:621:static int tramp_mremap(const struct vm_special_mapping *sm, struct vm_area_struct *new_vma)
 arch/x86/kernel/uprobes.c-622-{
 --
 arch/x86/mm/ioremap.c=517=void *xlate_dev_mem_ptr(phys_addr_t phys)
 --
 arch/x86/mm/ioremap.c-522-
 arch/x86/mm/ioremap.c:523:	/* memremap() maps if RAM, otherwise falls back to ioremap() */
 arch/x86/mm/ioremap.c:524:	vaddr = memremap(start, PAGE_SIZE, MEMREMAP_WB);
 arch/x86/mm/ioremap.c-525-
 arch/x86/mm/ioremap.c:526:	/* Only add the offset on success and return NULL if memremap() failed */
 arch/x86/mm/ioremap.c-527-	if (vaddr)
 --
 arch/x86/mm/ioremap.c=641=static bool __ref __memremap_is_setup_data(resource_size_t phys_addr, bool early)
 --
 arch/x86/mm/ioremap.c-657-		else
 arch/x86/mm/ioremap.c:658:			data = memremap(paddr, setup_data_sz, MEMREMAP_WB | MEMREMAP_DEC);
 arch/x86/mm/ioremap.c-659-		if (!data) {
 --
 arch/x86/mm/ioremap.c-684-				memunmap(data);
 arch/x86/mm/ioremap.c:685:				data = memremap(paddr, size, MEMREMAP_WB | MEMREMAP_DEC);
 arch/x86/mm/ioremap.c-686-			}
 --
 arch/x86/mm/ioremap.c=729=bool arch_memremap_can_ram_remap(resource_size_t phys_addr, unsigned long size,
 --
 arch/x86/mm/ioremap.c-751- * Architecture override of __weak function to adjust the protection attributes
 arch/x86/mm/ioremap.c:752: * used when remapping memory. By default, early_memremap() will map the data
 arch/x86/mm/ioremap.c-753- * as encrypted. Determine if an encrypted mapping should not be done and set
 --
 arch/x86/pci/common.c=643=int pcibios_device_add(struct pci_dev *dev)
 --
 arch/x86/pci/common.c-651-	while (pa_data) {
 arch/x86/pci/common.c:652:		data = memremap(pa_data, sizeof(*rom), MEMREMAP_WB);
 arch/x86/pci/common.c-653-		if (!data)
 --
 arch/x86/platform/efi/efi.c=436=static int __init efi_config_init(const efi_config_table_type_t *arch_tables)
 --
 arch/x86/platform/efi/efi.c-451-	 */
 arch/x86/platform/efi/efi.c:452:	config_tables = early_memremap(efi_config_table, efi_nr_tables * sz);
 arch/x86/platform/efi/efi.c-453-	if (config_tables == NULL) {
 --
]
]

27/2 2026/03/11 11:14 llm crash-repro-finder
0m Model:
gemini-3-pro-preview

Tokens:
							input: 47994
							output: 18
							thoughts: 21
						

Thoughts:
Okay, here's my thought process laid out, assuming I'm an expert:

**Investigating `mremap` and its Documentation**

Alright, time to get to work. First, I need to understand `mremap` properly. I'll start by making sure I'm looking at the correct place, which I believe is in `sys.txt`. I'll use `read-description` which should tell me precisely what `mremap` is and does. It's crucial to verify I'm referencing the appropriate documentation file and its content. This initial check will give me a proper grounding and ensure I'm heading in the correct direction.



28/2 2026/03/11 11:14 tool read-description
0m Args:
map[File:sys.txt]

Results:
map[Output:# Copyright 2015 syzkaller project authors. All rights reserved.
# Use of this source code is governed by Apache 2 LICENSE that can be found in the LICENSE file.

# Description of syscall arguments. See README.md for details.

include <linux/socket.h>
include <linux/ptrace.h>
include <linux/resource.h>
include <linux/stat.h>
include <linux/mman.h>
include <linux/time.h>
include <linux/mount.h>
include <linux/fs.h>
include <linux/eventpoll.h>
include <linux/signalfd.h>
include <linux/eventfd.h>
include <linux/timerfd.h>
include <linux/personality.h>
include <linux/wait.h>
include <linux/user.h>
include <linux/un.h>
include <linux/ioctl.h>
include <linux/fadvise.h>
include <linux/falloc.h>
include <linux/kexec.h>
include <linux/elf.h>
include <linux/fiemap.h>
include <linux/kd.h>
include <linux/vt.h>
include <linux/if_alg.h>
include <linux/nfc.h>
include <linux/sockios.h>
include <linux/net_tstamp.h>
include <linux/termios.h>
include <linux/fcntl.h>
include <linux/sched.h>
include <linux/mqueue.h>
include <linux/mempolicy.h>
include <linux/in.h>
include <linux/ip.h>
include <linux/tcp.h>
include <linux/udp.h>
include <linux/kcmp.h>
include <linux/nfs.h>
include <linux/syslog.h>
include <linux/exportfs.h>
include <linux/splice.h>
include <linux/filelock.h>
include <fs/overlayfs/overlayfs.h>
include <uapi/linux/memfd.h>
include <uapi/linux/module.h>
include <linux/ioprio.h>
include <linux/membarrier.h>
include <uapi/linux/kcov.h>
include <uapi/linux/membarrier.h>
include <uapi/linux/capability.h>
include <uapi/linux/seccomp.h>
include <uapi/linux/wait.h>
include <uapi/linux/watch_queue.h>
include <kernel/sched/sched.h>
include <uapi/linux/close_range.h>
include <uapi/linux/netfilter/xt_cgroup.h>
include <fs/smb/client/cifsglob.h>
include <xen/interface/io/xs_wire.h>
include <uapi/linux/prctl.h>
include <asm/prctl.h>
include <linux/reboot.h>
include <linux/swap.h>

resource fd[int32]: -1
resource fd_dir[fd]: AT_FDCWD

# alignptr/align32/align64/padto64 can be used when ABI uses int64/intptr to hold a smaller type.
# E.g. pid/uid stored as intptr/int64.
type alignptr[T] {
	v	T
} [align[PTR_SIZE]]

type align32[T] {
	v	T
} [align[4]]

type align64[T] {
	v	T
} [align[8]]

type padto32[T] {
	v	T
} [size[4]]

type padto64[T] {
	v	T
} [size[8]]

type signalno int32[0:65]
type signalnoptr intptr[0:65]

# syz_execute_func caused multiple problems:
# 1. First it lead to corpus explosion. The program used existing values in registers
# to pollute output area. We tried to zero registers (though, not reliably).
# 2. It lead to explosion again. The exact mechanics are unknown, here is one sample:
# syz_execute_func(&(0x7f0000000440)="f2af91930f0124eda133fa20430fbafce842f66188d0d4
#	430fc7f314c1ab5bf9e2f9660f3a0fae5e090000ba023c1fb63ac4817d73d74ec482310d46f44
#	9f216c863fa438036a91bdbae95aaaa420f383c02c401405c6bfd49d768d768f833fefbab6464
#	660f38323c8f26dbc1a1fe5ff6f6df0804f4c4efa59c0f01c4288ba6452e000054c4431d5cc100")
# 3. The code can also execute syscalls (and it is know to), but it's not subject to
# target.SanitizeCall. As the result it can do things that programs are not supposed to do.
# 4. Besides linux, corpus explosion also happens on freebsd and is clearly attributable
# to syz_execute_func based on corpus contents. Mechanics are also not known.
# It also did not cause finding of any new bugs (at least not that I know of).
# So it's disabled on all OSes until we figure out how to resolve all these problems.
syz_execute_func(text ptr[in, text[target]]) (disabled)

# Exclude /sys/power/state as reported in https://lkml.org/lkml/2021/5/27/653
openat$sysfs(fd const[AT_FDCWD], dir ptr[in, glob["/sys/**/*:-/sys/power/state"]], flags flags[open_flags], mode flags[open_mode]) fd

open(file ptr[in, filename], flags flags[open_flags], mode flags[open_mode]) fd (automatic_helper)
# Just so that we have something that creates fd_dir resources.
open$dir(file ptr[in, filename], flags flags[open_flags], mode flags[open_mode]) fd_dir (automatic_helper)
openat$dir(fd const[AT_FDCWD], file ptr[in, filename], flags flags[open_flags], mode flags[open_mode]) fd_dir (automatic_helper)
openat(fd fd_dir[opt], file ptr[in, filename], flags flags[open_flags], mode flags[open_mode]) fd
openat2$dir(fd const[AT_FDCWD], file ptr[in, filename], how ptr[in, open_how], size bytesize[how]) fd_dir (automatic_helper)
openat2(fd fd_dir[opt], file ptr[in, filename], how ptr[in, open_how], size bytesize[how]) fd
creat(file ptr[in, filename], mode flags[open_mode]) fd
close(fd fd)
read(fd fd, buf buffer[out], count len[buf])
pread64(fd fd, buf buffer[out], count len[buf], pos fileoff)
readv(fd fd, vec ptr[in, array[iovec_out]], vlen len[vec])
preadv(fd fd, vec ptr[in, array[iovec_out]], vlen len[vec], off_low int32, off_high int32)
preadv2(fd fd, vec ptr[in, array[iovec_out]], vlen len[vec], off_low int32, off_high int32, flags flags[rwf_flags])
write(fd fd, buf buffer[in], count len[buf])
pwrite64(fd fd, buf buffer[in], count len[buf], pos fileoff)
writev(fd fd, vec ptr[in, array[iovec_in]], vlen len[vec])
pwritev(fd fd, vec ptr[in, array[iovec_in]], vlen len[vec], off_low int32, off_high int32)
pwritev2(fd fd, vec ptr[in, array[iovec_in]], vlen len[vec], off_low int32, off_high int32, flags flags[rwf_flags])
# lseek can return negative offsets when the file has FMODE_UNSIGNED_OFFSET flag:
# https://elixir.bootlin.com/linux/v6.11-rc7/source/fs/read_write.c#L56
# It's unclear how it's then even possible to distinguish error from valid offset, but it's not our problem.
# We just have to ignore return values of lseek.
lseek(fd fd, offset fileoff, whence flags[seek_whence]) (ignore_return)
copy_file_range(fd_in fd, off_in ptr[inout, fileoff[int64], opt], fd_out fd, off_out ptr[inout, fileoff[int64], opt], len intptr, flags flags[copy_file_range_flags])

rwf_flags = RWF_DSYNC, RWF_HIPRI, RWF_SYNC, RWF_NOWAIT, RWF_APPEND
copy_file_range_flags = 0

dup(oldfd fd) fd
dup2(oldfd fd, newfd fd) fd
dup3(oldfd fd, newfd fd, flags flags[dup_flags]) fd

pipe(pipefd ptr[out, pipefd])
pipe2(pipefd ptr[out, pipefd], flags flags[pipe_flags])

tee(fdin fd, fdout fd, len intptr, f flags[splice_flags])
splice(fdin fd, offin ptr[in, fileoff[int64]], fdout fd, offout ptr[in, fileoff[int64]], len intptr, f flags[splice_flags])
vmsplice(fd fd, vec ptr[in, array[iovec_in]], vlen len[vec], f flags[splice_flags])
sendfile(fdout fd, fdin fd, off ptr[inout, fileoff[intptr], opt], count intptr)
sendfile64(fdout fd, fdin fd, off ptr[inout, fileoff[int64], opt], count intptr)

cachestat_range {
	off	fileoff[int64]
	len	int64
}

cachestat {
	nr_cache		int64
	nr_dirty		int64
	nr_writeback		int64
	nr_evicted		int64
	nr_recently_evicted	int64
}

stat(file ptr[in, filename], statbuf ptr[out, stat])
lstat(file ptr[in, filename], statbuf ptr[out, stat])
fstat(fd fd, statbuf ptr[out, stat])
newfstatat(dfd const[AT_FDCWD], file ptr[in, filename], statbuf ptr[out, stat], flag flags[statx_flags])
stat64(file ptr[in, filename], statbuf ptr[out, stat64])
lstat64(file ptr[in, filename], statbuf ptr[out, stat64])
fstat64(fd fd, statbuf ptr[out, stat64])
fstatat64(dfd const[AT_FDCWD], file ptr[in, filename], statbuf ptr[out, stat64], flag flags[statx_flags])
statx(fd fd_dir, file ptr[in, filename], flags flags[statx_flags], mask flags[statx_mask], statxbuf ptr[out, statx])
cachestat(fd fd, cstat_range ptr[in, cachestat_range], cstat ptr[out, cachestat], flags const[0])

poll(fds ptr[in, array[pollfd]], nfds len[fds], timeout int32)
ppoll(fds ptr[in, array[pollfd]], nfds len[fds], tsp ptr[in, timespec], sigmask ptr[in, sigset_t], size len[sigmask])
select(n len[inp], inp ptr[inout, fd_set], outp ptr[inout, fd_set], exp ptr[inout, fd_set], tvp ptr[inout, timeval])
pselect6(n len[inp], inp ptr[inout, fd_set], outp ptr[inout, fd_set], exp ptr[inout, fd_set], tvp ptr[inout, timespec], sig ptr[in, sigset_size])

resource fd_epoll[fd]
epoll_create(size int32) fd_epoll
epoll_create1(flags flags[epoll_flags]) fd_epoll
epoll_ctl$EPOLL_CTL_ADD(epfd fd_epoll, op const[EPOLL_CTL_ADD], fd fd, ev ptr[in, epoll_event])
epoll_ctl$EPOLL_CTL_MOD(epfd fd_epoll, op const[EPOLL_CTL_MOD], fd fd, ev ptr[in, epoll_event])
epoll_ctl$EPOLL_CTL_DEL(epfd fd_epoll, op const[EPOLL_CTL_DEL], fd fd)
epoll_wait(epfd fd_epoll, events ptr[out, array[epoll_event]], maxevents len[events], timeout int32)
epoll_pwait(epfd fd_epoll, events ptr[out, array[epoll_event]], maxevents len[events], timeout int32, sigmask ptr[in, sigset_t], size bytesize[sigmask])
epoll_pwait2(epfd fd_epoll, events ptr[out, array[epoll_event]], maxevents len[events], timeout ptr[in, timespec], sigmask ptr[in, sigset_t], size bytesize[sigmask])

resource fd_timer[fd]
signalfd(fd fd, mask ptr[in, sigset_t], size len[mask]) fd
signalfd4(fd fd, mask ptr[in, sigset_t], size len[mask], flags flags[signalfd_flags]) fd
timerfd_create(clockid flags[clock_type], flags flags[timerfd_create_flags]) fd_timer
timerfd_settime(fd fd_timer, flags flags[timerfd_settime_flags], new ptr[in, itimerspec], old ptr[out, itimerspec])
timerfd_gettime(fd fd_timer, cur ptr[out, itimerspec])
ioctl$TFD_IOC_SET_TICKS(fd fd_timer, cmd const[TFD_IOC_SET_TICKS], arg ptr[in, int64])

resource fd_event[fd]
eventfd(initval int32) fd_event
eventfd2(initval int32, flags flags[eventfd_flags]) fd_event
read$eventfd(fd fd_event, val ptr[out, int64], len len[val])
write$eventfd(fd fd_event, val ptr[in, int64], len len[val])

brk(brk intptr)
mmap(addr vma, len len[addr], prot flags[mmap_prot], flags flags[mmap_flags], fd fd, offset intptr[0:0xffffffff, 0x1000])
munmap(addr vma, len len[addr])
mremap(addr vma, len len[addr], newlen len[newaddr], flags flags[mremap_flags], newaddr vma)
remap_file_pages(addr vma, size len[addr], prot flags[mmap_prot], pgoff intptr, flags flags[mmap_flags])
mprotect(addr vma, len len[addr], prot flags[mmap_prot])
msync(addr vma, len len[addr], f flags[msync_flags])
madvise(addr vma, len len[addr], advice flags[madvise_flags])
process_madvise(pidfd fd_pidfd, vec ptr[in, array[iovec_in]], vlen len[vec], advice flags[madvise_flags], flags const[0])
process_mrelease(pidfd fd_pidfd, flags const[0])
fadvise64(fd fd, offset fileoff, len intptr, advice flags[fadvise_flags])
readahead(fd fd, off intptr, count intptr)
mbind(addr vma, len len[addr], mode flags[mbind_mode], nodemask ptr[in, int64], maxnode intptr, flags flags[mbind_flags])
move_pages(pid pid, nr len[pages], pages ptr[in, array[vma]], nodes ptr[in, array[int32], opt], status ptr[out, array[int32]], flags flags[move_pages_flags])
migrate_pages(pid pid, maxnode intptr, old ptr[in, int64], new ptr[in, int64])
set_mempolicy(mode flags[mbind_mode], nodemask ptr[in, int64], maxnode intptr)
get_mempolicy(mode ptr[out, int32], nodemask ptr[out, int64], maxnode intptr, addr vma, flags flags[mempolicy_flags])
set_mempolicy_home_node(addr vma, len len[addr], home_node intptr[0:3], flags const[0])
mincore(addr vma, size len[addr], vec buffer[out])
mlock(addr vma, size len[addr])
mlock2(addr vma, size len[addr], flags flags[mlock_flags])
munlock(addr vma, size len[addr])
mlockall(flags flags[mlockall_flags])
munlockall()
kcmp(pid1 pid, pid2 pid, type flags[kcmp_flags], fd1 fd, fd2 fd)
kcmp$KCMP_EPOLL_TFD(pid1 pid, pid2 pid, type const[KCMP_EPOLL_TFD], fd1 fd, idx2 ptr[in, kcmp_epoll_slot])

resource fd_memfd[fd]
memfd_create(name ptr[in, string], flags flags[memfd_flags]) fd_memfd
memfd_flags = MFD_CLOEXEC, MFD_ALLOW_SEALING, MFD_HUGETLB
_ = MFD_HUGE_SHIFT, MFD_HUGE_MASK, MFD_HUGE_64KB, MFD_HUGE_512KB, MFD_HUGE_1MB, MFD_HUGE_2MB, MFD_HUGE_8MB, MFD_HUGE_16MB

memfd_secret(flags flags[memfd_secret_flags]) fd
memfd_secret_flags = O_CLOEXEC

resource pkey[int32]: -1
pkey_alloc(flags const[0], val flags[pkey_flags]) pkey
pkey_free(key pkey)
pkey_mprotect(addr vma, len len[addr], prot flags[mmap_prot], key pkey)
syz_pkey_set(key pkey, val flags[pkey_flags])

pkey_flags = PKEY_DISABLE_ACCESS, PKEY_DISABLE_WRITE

restart_syscall()

# Almighty!
ioctl(fd fd, cmd int32, arg buffer[in])

ioctl$int_in(fd fd, cmd flags[ioctl_int_in], v ptr[in, int64])
ioctl$int_out(fd fd, cmd flags[ioctl_int_out], v ptr[out, intptr])

ioctl$FIOCLEX(fd fd, cmd const[FIOCLEX])
ioctl$FIONCLEX(fd fd, cmd const[FIONCLEX])
ioctl$FITHAW(fd fd, cmd const[FITHAW])

# FIFREEZE is enabled in sandbox mode only because it can easily kill the machine.
ioctl$FIFREEZE(fd fd, cmd const[FIFREEZE]) (snapshot)

fcntl$dupfd(fd fd, cmd flags[fcntl_dupfd], arg fd) fd
fcntl$getflags(fd fd, cmd flags[fcntl_getflags])
fcntl$setflags(fd fd, cmd const[F_SETFD], flags flags[fcntl_flags])
fcntl$setstatus(fd fd, cmd const[F_SETFL], flags flags[fcntl_status])
fcntl$lock(fd fd, cmd flags[fcntl_lock], lock ptr[in, flock])
fcntl$getown(fd fd, cmd const[F_GETOWN]) pid (automatic_helper)
fcntl$setown(fd fd, cmd const[F_SETOWN], pid pid)
fcntl$getownex(fd fd, cmd const[F_GETOWN_EX], arg ptr[out, f_owner_ex])
fcntl$setownex(fd fd, cmd const[F_SETOWN_EX], arg ptr[in, f_owner_ex])
fcntl$setsig(fd fd, cmd const[F_SETSIG], sig signalnoptr)
fcntl$setlease(fd fd, cmd const[F_SETLEASE], typ flags[flock_type])
fcntl$notify(fd fd, cmd const[F_NOTIFY], typ flags[fcntl_notify])
fcntl$setpipe(fd fd, cmd const[F_SETPIPE_SZ], sz intptr)
fcntl$addseals(fd fd, cmd const[F_ADD_SEALS], seals flags[seal_types])
fcntl$F_GET_RW_HINT(fd fd, cmd const[F_GET_RW_HINT], hint ptr[out, int64])
fcntl$F_GET_FILE_RW_HINT(fd fd, cmd const[F_GET_FILE_RW_HINT], hint ptr[out, int64])
fcntl$F_SET_RW_HINT(fd fd, cmd const[F_SET_RW_HINT], hint ptr[in, flags[fcntl_rw_hint, int64]])
fcntl$F_SET_FILE_RW_HINT(fd fd, cmd const[F_SET_FILE_RW_HINT], hint ptr[in, flags[fcntl_rw_hint, int64]])

# Only some commands break return values.
# When/if we have stricter enforcement of arguments, we may remove some of breaks_returns attributes.
ptrace(req flags[ptrace_req], pid pid) (breaks_returns)
ptrace$peek(req flags[ptrace_req_peek], pid pid, addr ptr[out, intptr]) (breaks_returns)
ptrace$poke(req flags[ptrace_req_poke], pid pid, addr ptr[out, intptr], data intptr) (breaks_returns)
ptrace$peekuser(req const[PTRACE_PEEKUSR], pid pid, addr intptr) (breaks_returns)
ptrace$pokeuser(req const[PTRACE_POKEUSR], pid pid, addr intptr, data intptr) (breaks_returns)
ptrace$getregs(req flags[ptrace_req_getregs], pid pid, ignored intptr, data buffer[out]) (breaks_returns)
ptrace$getregset(req const[PTRACE_GETREGSET], pid pid, what flags[pthread_regset], data ptr[in, iovec_out]) (breaks_returns)
ptrace$setregs(req flags[ptrace_req_setregs], pid pid, ignored intptr, data buffer[in]) (breaks_returns)
ptrace$setregset(req const[PTRACE_SETREGSET], pid pid, what flags[pthread_regset], data ptr[in, iovec_in]) (breaks_returns)
ptrace$getsig(req const[PTRACE_GETSIGINFO], pid pid, ignored intptr, data ptr[out, siginfo]) (breaks_returns)
ptrace$setsig(req const[PTRACE_SETSIGINFO], pid pid, ignored intptr, data ptr[in, siginfo]) (breaks_returns)
ptrace$setopts(req flags[ptrace_req_setopts], pid pid, ignored intptr, flags flags[ptrace_options]) (breaks_returns)
ptrace$getenv(req const[PTRACE_GETEVENTMSG], pid pid, ignored intptr, data ptr[out, intptr]) (breaks_returns)
ptrace$cont(req flags[ptrace_req_cont], pid pid, ignored intptr, data intptr) (breaks_returns)
ptrace$PTRACE_SECCOMP_GET_FILTER(req const[PTRACE_SECCOMP_GET_FILTER], pid pid, addr intptr, data ptr[out, array[int8]]) (breaks_returns)
ptrace$PTRACE_SECCOMP_GET_METADATA(req const[PTRACE_SECCOMP_GET_METADATA], pid pid, addr len[data], data ptr[in, seccomp_metadata]) (breaks_returns)
ptrace$PTRACE_SETSIGMASK(req const[PTRACE_SETSIGMASK], pid pid, size bytesize[data], data ptr[in, sigset_t]) (breaks_returns)
ptrace$PTRACE_GETSIGMASK(req const[PTRACE_GETSIGMASK], pid pid, size bytesize[data], data ptr[out, sigset_t]) (breaks_returns)
ptrace$ARCH_GET_FS(req const[PTRACE_ARCH_PRCTL], pid pid, arg ptr[out, intptr], code const[ARCH_GET_FS]) (breaks_returns)
ptrace$ARCH_GET_GS(req const[PTRACE_ARCH_PRCTL], pid pid, arg ptr[out, intptr], code const[ARCH_GET_GS]) (breaks_returns)
ptrace$ARCH_SET_GS(req const[PTRACE_ARCH_PRCTL], pid pid, arg ptr[out, intptr], code const[ARCH_SET_GS]) (breaks_returns)
ptrace$ARCH_GET_CPUID(req const[PTRACE_ARCH_PRCTL], pid pid, arg const[0], code const[ARCH_GET_CPUID]) (breaks_returns)
ptrace$ARCH_SET_CPUID(req const[PTRACE_ARCH_PRCTL], pid pid, arg boolptr, code const[ARCH_SET_CPUID]) (breaks_returns)
ptrace$ARCH_MAP_VDSO_X32(req const[PTRACE_ARCH_PRCTL], pid pid, arg intptr, code const[ARCH_MAP_VDSO_X32]) (breaks_returns)
ptrace$ARCH_MAP_VDSO_32(req const[PTRACE_ARCH_PRCTL], pid pid, arg intptr, code const[ARCH_MAP_VDSO_32]) (breaks_returns)
ptrace$ARCH_MAP_VDSO_64(req const[PTRACE_ARCH_PRCTL], pid pid, arg intptr, code const[ARCH_MAP_VDSO_64]) (breaks_returns)
ptrace$ARCH_SHSTK_STATUS(req const[PTRACE_ARCH_PRCTL], pid pid, res ptr[out, intptr], cmd const[ARCH_SHSTK_STATUS]) (breaks_returns)
ptrace$ARCH_SHSTK_LOCK(req const[PTRACE_ARCH_PRCTL], pid pid, features flags[shadow_stack_features], cmd const[ARCH_SHSTK_LOCK]) (breaks_returns)
ptrace$ARCH_SHSTK_UNLOCK(req const[PTRACE_ARCH_PRCTL], pid pid, features flags[shadow_stack_features], cmd const[ARCH_SHSTK_UNLOCK]) (breaks_returns)
ptrace$ARCH_SHSTK_ENABLE(req const[PTRACE_ARCH_PRCTL], pid pid, features flags[shadow_stack_features], cmd const[ARCH_SHSTK_ENABLE]) (breaks_returns)
ptrace$ARCH_SHSTK_DISABLE(req const[PTRACE_ARCH_PRCTL], pid pid, features flags[shadow_stack_features], cmd const[ARCH_SHSTK_DISABLE]) (breaks_returns)
ptrace$ARCH_GET_UNTAG_MASK(req const[PTRACE_ARCH_PRCTL], pid pid, arg ptr[out, intptr], cmd const[ARCH_GET_UNTAG_MASK]) (breaks_returns)
ptrace$ARCH_GET_MAX_TAG_BITS(req const[PTRACE_ARCH_PRCTL], pid pid, arg ptr[out, intptr], cmd const[ARCH_GET_MAX_TAG_BITS]) (breaks_returns)
ptrace$ARCH_ENABLE_TAGGED_ADDR(req const[PTRACE_ARCH_PRCTL], pid pid, arg intptr[1:6], cmd const[ARCH_ENABLE_TAGGED_ADDR]) (breaks_returns)
ptrace$ARCH_FORCE_TAGGED_SVA(req const[PTRACE_ARCH_PRCTL], pid pid, arg const[0], cmd const[ARCH_FORCE_TAGGED_SVA]) (breaks_returns)

map_shadow_stack(addr vma, size bytesize[addr], flags boolptr)

seccomp_metadata {
	filter_off	int64
	flags		const[0, int64]
}

ptrace_peeksiginfo_args {
	off	int64
	flags	flags[ptrace_peeksiginfo_flags, int32]
	nr	len[syscall:data, int32]
}

ptrace$peeksig(req const[PTRACE_PEEKSIGINFO], pid pid, args ptr[in, ptrace_peeksiginfo_args], data ptr[out, array[siginfo]])

capget(hdr ptr[in, cap_header], data ptr[in, cap_data])
capset(hdr ptr[in, cap_header], data ptr[in, cap_data])

resource fd_mq[fd]
mq_open(name ptr[in, string], flags flags[mq_open_flags], mode flags[open_mode], attr ptr[in, mq_attr]) fd_mq
mq_timedsend(mqd fd_mq, msg buffer[in], msglen len[msg], prio intptr, timeout ptr[in, timespec, opt])
mq_timedreceive(mqd fd_mq, msg buffer[out], msglen len[msg], prio intptr, timeout ptr[in, timespec, opt])
mq_notify(mqd fd_mq, notif ptr[in, sigevent])
mq_getsetattr(mqd fd_mq, attr ptr[in, mq_attr], oldattr ptr[out, mq_attr, opt])
mq_unlink(name ptr[in, string])

mknod(file ptr[in, filename], mode flags[mknod_mode], dev int32)
mknod$loop(file ptr[in, filename], mode flags[mknod_mode], dev proc[1792, 2])
mknodat$loop(dirfd fd_dir, file ptr[in, filename], mode flags[mknod_mode], dev proc[1792, 2])
mknodat$null(dirfd fd_dir, file ptr[in, filename], mode flags[mknod_mode], dev const[0x103])
mknodat(dirfd fd_dir, file ptr[in, filename], mode flags[mknod_mode], dev int32)
chmod(file ptr[in, filename], mode flags[open_mode])
fchmod(fd fd, mode flags[open_mode])
fchmodat(dirfd fd_dir, file ptr[in, filename], mode flags[open_mode])
chown(file ptr[in, filename], uid uid, gid gid)
lchown(file ptr[in, filename], uid uid, gid gid)
fchown(fd fd, uid uid, gid gid)
fchownat(dirfd fd_dir, file ptr[in, filename], uid uid, gid gid, flags flags[at_flags])
fallocate(fd fd, mode flags[fallocate_mode], off intptr, len intptr)
faccessat(dirfd fd_dir, pathname ptr[in, filename], mode flags[open_mode])
faccessat2(dirfd fd_dir, pathname ptr[in, filename], mode flags[open_mode], flags flags[faccessat_flags])
utime(filename ptr[in, filename], times ptr[in, utimbuf])
utimes(filename ptr[in, filename], times ptr[in, itimerval])
futimesat(dir fd_dir, pathname ptr[in, filename], times ptr[in, itimerval])
utimensat(dir fd_dir, pathname ptr[in, filename], times ptr[in, itimerval], flags flags[utimensat_flags])

# Small trick - syzkaller cannot give the proper stack pointer to clone(), but we can do it with the aid of pseudo syscalls.
syz_clone(flags flags[clone_flags], stack buffer[in], stack_len bytesize[stack], parentid ptr[out, int32], childtid ptr[out, int32], tls buffer[in]) pid (automatic_helper)

syz_clone3(args ptr[in, clone_args], size bytesize[args]) pid (automatic_helper)

# We need these disabled definitions to simplify the presence and the NR checking.
clone(flags flags[clone_flags], sp buffer[in], parentid ptr[out, int32], childtid ptr[out, int32], tls buffer[in]) (breaks_returns, disabled)
clone3(args ptr[in, clone_args], size bytesize[args]) pid (breaks_returns, disabled)

clone_args {
	flags		flags[clone3_flags, int64]
	pidfd		ptr64[out, fd_pidfd]
	child_tid	ptr64[out, pid]
	parent_tid	ptr64[out, pid]
	exit_signal	align64[signalno]
	stack		ptr64[out, array[int8]]
	stack_size	bytesize[stack, int64]
	tls		ptr64[out, array[int8]]
	set_tid		ptr64[in, array[pid]]
	set_tid_size	len[set_tid, int64]
	cgroup		align64[fd_cgroup]
}

resource pid[int32]: 0, -1
resource uid[int32]: 0, -1, 0xee00, 0xee01
resource gid[int32]: 0, -1, 0xee00, 0xee01

getgid() gid (automatic_helper)
getegid() gid (automatic_helper)
setuid(uid uid)
setgid(gid gid)
getuid() uid (automatic_helper)
geteuid() uid (automatic_helper)
setpgid(pid pid, pgid pid)
getpgid(pid pid) pid (automatic_helper)
getpgrp(pid pid) pid (automatic_helper)
getpid() pid (automatic_helper)
gettid() pid (automatic_helper)
setreuid(ruid uid, euid uid)
setregid(rgid gid, egid gid)
setresuid(ruid uid, euid uid, suid uid)
setresgid(rgid gid, egid gid, sgid gid)
getresuid(ruid ptr[out, uid], euid ptr[out, uid], suid ptr[out, uid])
getresgid(rgid ptr[out, gid], egid ptr[out, gid], sgid ptr[out, gid])
setfsuid(fsuid uid)
setfsgid(fsgid gid)
getgroups(size len[list], list ptr[inout, array[gid]])
setgroups(size len[list], list ptr[in, array[gid]])
personality(persona flags[personality_flags])
# Don't mess with parent (fuzzer). If we ptrace attach to it, it will hang.
# If we setrlimit for parent, it will misbehave. Killing - the same. Nothing good.
#getppid() pid
#getsid(pid pid) pid
#setsid() pid

link(old ptr[in, filename], new ptr[in, filename])
linkat(oldfd fd_dir, old ptr[in, filename], newfd fd_dir, new ptr[in, filename], flags flags[linkat_flags])
symlinkat(old ptr[in, filename], newfd fd_dir, new ptr[in, filename])
symlink(old ptr[in, filename], new ptr[in, filename])
unlink(path ptr[in, filename])
unlinkat(fd fd_dir, path ptr[in, filename], flags flags[unlinkat_flags])
readlink(path ptr[in, filename], buf buffer[out], siz len[buf])
readlinkat(fd fd_dir, path ptr[in, filename], buf buffer[out], siz len[buf])
rename(old ptr[in, filename], new ptr[in, filename])
renameat(oldfd fd_dir, old ptr[in, filename], newfd fd_dir, new ptr[in, filename])
renameat2(oldfd fd_dir, old ptr[in, filename], newfd fd_dir, new ptr[in, filename], flags flags[renameat2_flags])
mkdir(path ptr[in, filename], mode flags[open_mode])
mkdirat(fd fd_dir[opt], path ptr[in, filename], mode flags[open_mode])
rmdir(path ptr[in, filename])
truncate(file ptr[in, filename], len intptr)
ftruncate(fd fd, len intptr)
flock(fd fd, op flags[flock_op])
fsync(fd fd)
fdatasync(fd fd)
sync()
syncfs(fd fd)
sync_file_range(fd fd, off intptr, nbytes intptr, flags flags[sync_file_flags])
lookup_dcookie(cookie intptr, buf buffer[out], len len[buf])
getdents(fd fd_dir, ent buffer[out], count len[ent])
getdents64(fd fd_dir, ent buffer[out], count len[ent])
name_to_handle_at(fd fd_dir, file ptr[in, filename], handle ptr[in, file_handle], mnt ptr[out, int32], flags flags[name_to_handle_at_flags])
open_by_handle_at(mountdirfd fd, handle ptr[in, file_handle], flags flags[open_flags])
chroot(dir ptr[in, filename])
getcwd(buf buffer[out], size len[buf])
chdir(dir ptr[in, filename])
fchdir(fd fd)
pivot_root(new_root ptr[in, filename], put_old ptr[in, filename])

sysfs$1(option const[1], fsname ptr[in, string])
sysfs$2(option const[2], fsindex intptr, fsname buffer[out])
sysfs$3(option const[3])
statfs(path ptr[in, filename], buf buffer[out])
fstatfs(fd fd, buf buffer[out])

uselib(lib ptr_binfmt_file)
init_module(mod ptr[in, binfmt_elf64], len len[mod], args ptr[in, string])
finit_module(fd fd_binfmt, args ptr[in, string], flags flags[finit_module_flags])
delete_module(name ptr[in, string], flags flags[delete_module_flags])
kexec_load(entry intptr, nr_segments len[segments], segments ptr[in, array[kexec_segment]], flags flags[kexec_load_flags])
syslog(cmd flags[syslog_cmd], buf ptr[out, array[int8], opt], len len[buf])
uname(buf buffer[out])
sysinfo(info buffer[out])
ustat(dev intptr, buf ptr[out, ustat])
acct(filename ptr[in, filename, opt])

getrusage(who flags[rusage_who], usage ptr[out, rusage])
getrlimit(res flags[rlimit_type], rlim ptr[out, rlimit])
setrlimit(res flags[rlimit_type], rlim ptr[in, rlimit])
prlimit64(pid pid, res flags[rlimit_type], new ptr[in, rlimit, opt], old ptr[out, rlimit, opt])

iopl(level int8)
ioperm(from intptr, num intptr, on intptr)
ioprio_get$pid(which flags[ioprio_which_pid], who pid)
ioprio_get$uid(which flags[ioprio_which_uid], who uid)
ioprio_set$pid(which flags[ioprio_which_pid], who pid, ioprio flags[ioprio_priorities])
ioprio_set$uid(which flags[ioprio_which_uid], who uid, ioprio flags[ioprio_priorities])

# NEED: we can express the construction of integers using structs with flags
# and bitfields, which are normally obtained using a combination of bitwise
# operations with the help of macros. However, structs can't be directly passed
# as a syscall argument; therefore, such constructions can't be directly passed either.
# One example is ioprio argument for ioprio_set, where we could have expressed the
# construction less messy. For now, some subset of valid values are enumerated as
# in ioprio_priorities.

# ioprio priority values are obtained using IOPRIO_PRIO_VALUE(class, data) macro.
# the behaviour of the macro is mimicked below and some priority values are enumerated.
ioprio_priorities = IOPRIO_CLASS_NONE_PRIO, IOPRIO_CLASS_IDLE_PRIO, IOPRIO_CLASS_RT_HIGH_PRIO, IOPRIO_CLASS_BE_HIGH_PRIO, IOPRIO_CLASS_RT_MID_PRIO, IOPRIO_CLASS_BE_MID_PRIO, IOPRIO_CLASS_RT_LOW_PRIO, IOPRIO_CLASS_BE_LOW_PRIO

# The classes RT and BE take data values from the range [0:7]. Use a subset of the available
# values (0, 4, 7) to ease collisions, and to avoid redundancy. The classes NONE and IDLE
# have only one valid priority value each with data set to 0.
define IOPRIO_CLASS_NONE_PRIO	(IOPRIO_CLASS_NONE << IOPRIO_CLASS_SHIFT)
define IOPRIO_CLASS_IDLE_PRIO	(IOPRIO_CLASS_IDLE << IOPRIO_CLASS_SHIFT)
define IOPRIO_CLASS_RT_HIGH_PRIO	(IOPRIO_CLASS_RT << IOPRIO_CLASS_SHIFT)
define IOPRIO_CLASS_BE_HIGH_PRIO	(IOPRIO_CLASS_BE << IOPRIO_CLASS_SHIFT)
define IOPRIO_CLASS_RT_MID_PRIO	((IOPRIO_CLASS_RT << IOPRIO_CLASS_SHIFT) + 4)
define IOPRIO_CLASS_BE_MID_PRIO	((IOPRIO_CLASS_BE << IOPRIO_CLASS_SHIFT) + 4)
define IOPRIO_CLASS_RT_LOW_PRIO	((IOPRIO_CLASS_RT << IOPRIO_CLASS_SHIFT) + 7)
define IOPRIO_CLASS_BE_LOW_PRIO	((IOPRIO_CLASS_BE << IOPRIO_CLASS_SHIFT) + 7)

resource timerid[int32]
timer_create(id flags[clock_id], ev ptr[in, sigevent], timerid ptr[out, timerid])
timer_gettime(timerid timerid, setting ptr[out, itimerspec])
timer_getoverrun(timerid timerid)
timer_settime(timerid timerid, flags flags[timer_flags], new ptr[in, itimerspec], old ptr[out, itimerspec, opt])
timer_delete(timerid timerid)

time(t ptr[out, intptr])
clock_gettime(id flags[clock_id], tp ptr[out, timespec])
clock_settime(id flags[clock_id], tp ptr[in, timespec])
clock_adjtime(id flags[clock_id], tx ptr[in, timex])
clock_getres(id flags[clock_id], tp ptr[out, timespec])
clock_nanosleep(id flags[clock_id], flags flags[timer_flags], rqtp ptr[in, timespec], rmtp ptr[out, timespec, opt])
rt_sigaction(sig signalno, act ptr[in, sigaction], oact ptr[out, sigaction, opt], sigsetsize len[fake], fake ptr[out, sigset_t])
rt_sigprocmask(how flags[sigprocmask_how], nset ptr[in, sigset_t], oset ptr[out, sigset_t, opt], sigsetsize len[nset])
rt_sigreturn()
rt_sigpending(set ptr[out, sigset_t], sigsetsize len[set])
rt_sigtimedwait(these ptr[in, sigset_t], info ptr[out, siginfo, opt], ts ptr[in, timespec], sigsetsize len[these])
rt_sigsuspend(new ptr[in, sigset_t], sigsetsize len[new])
rt_sigqueueinfo(pid pid, sig signalno, info ptr[in, siginfo])
rt_tgsigqueueinfo(gid pid, tid pid, sig signalno, info ptr[in, siginfo])
sigaltstack(ss ptr[in, sigaltstack], oss ptr[out, sigaltstack, opt])
tgkill(gid pid, tid pid, sig signalno)
tkill(tid pid, sig signalno)
pause()
alarm(seconds intptr)
nanosleep(req ptr[in, timespec], rem ptr[out, timespec, opt])
getitimer(which flags[getitimer_which], cur ptr[out, itimerval])
setitimer(which flags[getitimer_which], new ptr[in, itimerval], old ptr[out, itimerval, opt])
exit(code intptr)
exit_group(code intptr)
waitid(which flags[waitid_which], pid pid, infop ptr[out, siginfo, opt], options flags[wait_options], ru ptr[out, rusage, opt])
waitid$P_PIDFD(which const[P_PIDFD], pidfd fd_pidfd, infop ptr[out, siginfo, opt], options flags[wait_options], ru ptr[out, rusage, opt])
wait4(pid pid, status ptr[out, int32, opt], options flags[wait_options], ru ptr[out, rusage, opt])
times(buf ptr[out, tms])
# Can send signals to all processes (pid=-1) so it's enabled only in snapshot mode.
kill(pid pid, sig signalno) (snapshot)

set_thread_area(info ptr[in, user_desc])
get_thread_area(info ptr[in, user_desc])
modify_ldt$read(func const[0], buf buffer[out], len len[buf])
modify_ldt$write(func const[1], buf ptr[in, user_desc], len len[buf])
modify_ldt$read_default(func const[2], buf buffer[out], len len[buf])
modify_ldt$write2(func const[17], buf ptr[in, user_desc], len len[buf])
process_vm_readv(pid pid, loc_vec ptr[in, array[iovec_out]], loc_vlen len[loc_vec], rem_vec ptr[in, array[iovec_out]], rem_vlen len[rem_vec], flags const[0])
process_vm_writev(pid pid, loc_vec ptr[in, array[iovec_out]], loc_vlen len[loc_vec], rem_vec ptr[in, array[iovec_out]], rem_vlen len[rem_vec], flags const[0])
set_tid_address(tidptr ptr[out, int32])
getpriority(which flags[priority_which], who pid)
setpriority(which flags[priority_which], who pid, prio intptr)
sched_getscheduler(pid pid)
sched_setscheduler(pid pid, policy flags[sched_policy], prio ptr[in, int32])
sched_rr_get_interval(pid pid, tp ptr[out, timespec])
sched_getparam(pid pid, prio ptr[out, int32])
sched_setparam(pid pid, prio ptr[in, int32])
sched_getaffinity(pid pid, cpusetsize len[mask], mask ptr[out, int64])
sched_setaffinity(pid pid, cpusetsize len[mask], mask ptr[in, int64])
sched_getattr(pid pid, attr ptr[out, sched_attr], size len[attr], flags const[0])
sched_setattr(pid pid, attr ptr[in, sched_attr], flags const[0])
sched_yield()
getrandom(buf buffer[out], len len[buf], flags flags[getrandom_flags])

reboot(magic1 const[LINUX_REBOOT_MAGIC1], magic2 const[LINUX_REBOOT_MAGIC2], cmd flags[reboot_commands], arg const[0]) (snapshot)

reboot_commands = LINUX_REBOOT_CMD_RESTART, LINUX_REBOOT_CMD_HALT, LINUX_REBOOT_CMD_CAD_ON, LINUX_REBOOT_CMD_CAD_OFF, LINUX_REBOOT_CMD_POWER_OFF, LINUX_REBOOT_CMD_RESTART2, LINUX_REBOOT_CMD_SW_SUSPEND, LINUX_REBOOT_CMD_KEXEC

swapon(specialfile ptr[in, filename], swap_flags flags[swap_flags]) (snapshot)
swapoff(specialfile ptr[in, filename]) (snapshot)

swap_flags = SWAP_FLAG_PRIO_MASK, SWAP_FLAG_PREFER, SWAP_FLAG_DISCARD, SWAP_FLAG_DISCARD_ONCE, SWAP_FLAG_DISCARD_PAGES

sethostname(name ptr[in, array[int8]], len bytesize[name]) (snapshot)
setdomainname(name ptr[in, array[int8]], len bytesize[name]) (snapshot)

gettimeofday(tv ptr[out, timeval], tz ptr[out, timezone])
settimeofday(tv ptr[out, timeval], tz ptr[out, timezone]) (snapshot)
adjtimex(txc_p ptr[in, __kernel_timex]) (snapshot)

timezone {
	tz_minuteswest	int32
	tz_dsttime	int32
}

__kernel_timex {
	modes		flags[adjtimex_modes, int32]
	pad0		const[0, int32]
	offset		int64
	freq		int64
	maxerror	int64
	esterror	int64
	status		int32
	pad1		const[0, int32]
	constant	int64
	precision	int64
	tolerance	int64
	int		timeval
	tick		int64
	ppsfreq		int64
	jitter		int64
	shift		int32
	pad2		const[0, int32]
	stabil		int64
	jitcnt		int64
	calcnt		int64
	errcnt		int64
	stbcnt		int64
	tai		int32
	pad3		array[const[0, int32], 11]
}

adjtimex_modes = ADJ_OFFSET, ADJ_FREQUENCY, ADJ_MAXERROR, ADJ_ESTERROR, ADJ_STATUS, ADJ_TIMECONST, ADJ_TAI, ADJ_SETOFFSET, ADJ_MICRO, ADJ_NANO, ADJ_TICK

membarrier(cmd flags[membarrier_cmd], flags const[0])

membarrier_cmd = MEMBARRIER_CMD_GLOBAL, MEMBARRIER_CMD_GLOBAL_EXPEDITED, MEMBARRIER_CMD_PRIVATE_EXPEDITED, MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED, MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE, MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE, MEMBARRIER_CMD_QUERY, MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED, MEMBARRIER_CMD_SHARED

rseq(rseq ptr[in, rseq], rseq_len bytesize[rseq], flags boolptr, sig const[0])

rseq {
	cpu_id_start	const[0, int32]
	cpu_id		const[0, int32]
	rseq_cs		ptr64[in, rseq_cs, opt]
	flags		flags[rseq_cs_flags, int32]
} [align[32]]

rseq_cs {
	version			const[0, int32]
	flags			flags[rseq_cs_flags, int32]
	start_ip		int64
	post_commit_offset	int64
	abort_ip		int64
} [align[32]]

rseq_cs_flags = RSEQ_CS_FLAG_NO_RESTART_ON_PREEMPT, RSEQ_CS_FLAG_NO_RESTART_ON_SIGNAL, RSEQ_CS_FLAG_NO_RESTART_ON_MIGRATE

# start/end are unused for now, no definition of SYS_RISCV_FLUSH_ICACHE_LOCAL in uapi headers.
riscv_flush_icache(start const[0], end const[0], flags bool32)

syz_open_procfs(pid pid, file ptr[in, string[procfs_proc_file]]) fd

# TODO: some of net files are only available in the init namespace (e.g. bluetooth bnep and hci).
# We could find some way to open these files in the init namespace
# esp. taking into account that we create bluetooth sockets in init namespace.
procfs_proc_file = "auxv", "cmdline", "environ", "autogroup", "cgroup", "clear_refs", "comm", "coredump_filter", "cpuset", "gid_map", "io", "limits", "loginuid", "maps", "mountinfo", "mounts", "mountstats", "numa_maps", "oom_adj", "oom_score", "oom_score_adj", "pagemap", "personality", "projid_map", "sched", "schedstat", "sessionid", "setgroups", "smaps", "smaps_rollup", "totmaps", "stack", "stat", "statm", "status", "syscall", "timers", "uid_map", "wchan", "map_files", "attr", "attr/current", "attr/exec", "attr/fscreate", "attr/keycreate", "attr/prev", "attr/sockcreate", "ns", "children", "task", "fd", "fd/3", "fd/4", "fdinfo", "fdinfo/3", "fdinfo/4", "net", "net/anycast6", "net/arp", "net/bnep", "net/connector", "net/dev", "net/dev_mcast", "net/dev_snmp6", "net/fib_trie", "net/fib_triestat", "net/hci", "net/icmp", "net/icmp6", "net/if_inet6", "net/igmp", "net/igmp6", "net/ip6_flowlabel", "net/ip6_mr_cache", "net/ip6_mr_vif", "net/ip6_tables_matches", "net/ip6_tables_names", "net/ip6_tables_targets", "net/ip_mr_cache", "net/ip_mr_vif", "net/ip_tables_matches", "net/ip_tables_names", "net/ip_tables_targets", "net/ipv6_route", "net/l2cap", "net/llc/core", "net/llc/socket", "net/mcfilter", "net/mcfilter6", "net/netfilter", "net/netlink", "net/netstat", "net/nfsfs", "net/packet", "net/protocols", "net/psched", "net/ptype", "net/raw", "net/raw6", "net/rfcomm", "net/route", "net/rpc", "net/rt6_stats", "net/rt_acct", "net/rt_cache", "net/sco", "net/sctp", "net/snmp", "net/snmp6", "net/sockstat", "net/sockstat6", "net/softnet_stat", "net/stat", "net/tcp", "net/tcp6", "net/udp", "net/udp6", "net/udplite", "net/udplite6", "net/unix", "net/wireless", "net/xfrm_stat", "net/ip_vs", "net/ip_vs_stats", "net/ip_vs_stats_percpu", "net/nf_conntrack", "net/nf_conntrack_expect", "net/vlan/config", "net/vlan/vlan0", "net/vlan/vlan1", "net/kcm", "net/psched", "timerslack_ns"

openat$procfs(fd const[AT_FDCWD], file ptr[in, string[procfs_file]], flags const[O_RDONLY], mode const[0]) fd

procfs_file = "/proc/keys", "/proc/key-users", "/proc/crypto", "/proc/consoles", "/proc/cgroups", "/proc/zoneinfo", "/proc/vmstat", "/proc/vmallocinfo", "/proc/tty/drivers", "/proc/tty/ldiscs", "/proc/timer_list", "/proc/sysvipc/sem", "/proc/sysvipc/msg", "/proc/sysvipc/shm", "/proc/stat", "/proc/slabinfo", "/proc/schedstat", "/proc/partitions", "/proc/meminfo", "/proc/mdstat", "/proc/locks", "/proc/diskstats", "/proc/cpuinfo", "/proc/consoles", "/proc/bus/input/devices", "/proc/bus/input/handlers", "/proc/asound/seq/clients", "/proc/asound/seq/clients", "/proc/asound/seq/timer", "/proc/asound/timers"

resource fd_yama_ptrace_scope[fd]

# 0 - YAMA_SCOPE_DISABLED
# 1 - YAMA_SCOPE_RELATIONAL
# 2 - YAMA_SCOPE_CAPABILITY
# 3 - YAMA_SCOPE_NO_ATTACH
yama_modes = "0", "1", "2", "3"

openat$yama_ptrace_scope(fd const[AT_FDCWD], file ptr[in, string["/proc/sys/kernel/yama/ptrace_scope"]], flags const[O_RDWR], mode const[0]) fd_yama_ptrace_scope
write$yama_ptrace_scope(fd fd_yama_ptrace_scope, data ptr[in, string[yama_modes]], count len[data])

# Write into some interesting sysct's and sysfs/procfs files.
resource fd_sysctl[fd]

openat$sysctl(fd const[AT_FDCWD], file ptr[in, string[sysctl_file]], flags const[O_WRONLY], mode const[0]) fd_sysctl
write$sysctl(fd fd_sysctl, val ptr[in, string[sysctl_value]], len len[val])

sysctl_file = "/sys/kernel/mm/ksm/run", "/proc/sys/vm/compact_memory", "/proc/sys/vm/drop_caches", "/proc/sys/net/ipv4/tcp_timestamps", "/proc/sys/net/ipv4/tcp_sack", "/proc/sys/net/ipv4/tcp_dsack", "/proc/sys/net/ipv4/tcp_window_scaling", "/proc/sys/net/ipv4/tcp_syncookies", "/proc/sys/net/ipv4/tcp_recovery", "/proc/sys/net/ipv4/tcp_mtu_probing", "/proc/sys/net/ipv4/tcp_rfc1337", "/proc/self/clear_refs"
# Most of these values are suitable for all sysctl_file files.
sysctl_value = "0", "1", "2", "3", "4", "5", "6", "7"

# Write to this file triggers khugepaged scan.
# We don't want to write small values as we only want the explicitly triggered scan.
resource fd_khugepaged_scan[fd]

openat$khugepaged_scan(fd const[AT_FDCWD], file ptr[in, string["/sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs"]], flags const[O_WRONLY], mode const[0]) fd_khugepaged_scan
write$khugepaged_scan(fd fd_khugepaged_scan, val ptr[in, string["1000000"]], len len[val])

resource fd_tcp_congestion[fd]

openat$tcp_congestion(fd const[AT_FDCWD], file ptr[in, string["/proc/sys/net/ipv4/tcp_congestion_control"]], flags const[O_WRONLY], mode const[0]) fd_tcp_congestion
write$tcp_congestion(fd fd_tcp_congestion, val ptr[in, string[tcp_congestion]], len len[val])

tcp_congestion = "reno", "bbr", "bic", "cdg", "cubic", "dctcp", "westwood", "highspeed", "hybla", "htcp", "vegas", "nv", "veno", "scalable", "lp", "yeah", "illinois"

resource fd_tcp_mem[fd]

openat$tcp_mem(fd const[AT_FDCWD], file ptr[in, string[tcp_mem_files]], flags const[O_WRONLY], mode const[0]) fd_tcp_mem
write$tcp_mem(fd fd_tcp_mem, val ptr[in, tcp_mem_values], len len[val])

tcp_mem_files = "/proc/sys/net/ipv4/tcp_rmem", "/proc/sys/net/ipv4/tcp_wmem"

tcp_mem_values {
	v0	fmt[oct, int64]
	sp0	const[' ', int8]
	v1	fmt[oct, int64]
	sp1	const[' ', int8]
	v2	fmt[oct, int64]
	z	const[0, int8]
} [packed]

# /proc/self/reclaim is ChromeOS-specific:
# https://chromium.googlesource.com/chromiumos/third_party/kernel/+/4c3ad28b9c913%5E%21/
resource fd_proc_reclaim[fd]

openat$proc_reclaim(fd const[AT_FDCWD], file ptr[in, string["/proc/self/reclaim"]], flags const[O_WRONLY], mode const[0]) fd_proc_reclaim
write$proc_reclaim(fd fd_proc_reclaim, val ptr[in, string[proc_reclaim_vals]], len len[val])

proc_reclaim_vals = "file", "anon", "all"

resource fd_pidfd[fd]

openat$pidfd(fd const[AT_FDCWD], file ptr[in, string["/proc/self"]], flags flags[open_flags], mode const[0]) fd_pidfd
openat$thread_pidfd(fd const[AT_FDCWD], file ptr[in, string["/proc/thread-self"]], flags flags[open_flags], mode const[0]) fd_pidfd
pidfd_send_signal(fd fd_pidfd, sig signalno, info ptr[in, siginfo], flags const[0])

# pidfd_open is dangerous, so we use syz_pidfd_open instead.
pidfd_open(pid pid, flags const[0]) fd_pidfd (disabled)
syz_pidfd_open(pid pid, flags const[0]) fd_pidfd
pidfd_getfd(pidfd fd_pidfd, fd fd, flags const[0]) fd

close_range(fd fd, max_fd fd, flags flags[close_range_flags])

# Uncomment on your own account.
#syz_open_dev$char(dev const[0xc], major intptr, minor intptr) fd
#syz_open_dev$block(dev const[0xb], major intptr, minor intptr) fd

# /dev/console known to cause problems on at least two different kernels.
# It can turn off all output or produce massive amounts of garbage on console.
# Disable it for now.
#syz_open_dev$console(dev ptr[in, string["/dev/console"]], id const[0], flags flags[open_flags]) fd

resource fd_autofs[fd]

# These devices are relatively safe (don't reboot and don't corrupt kernel memory).
# They need a more comprehensive support. But let at least open them for now,
# maybe fuzzer will be able to skrew them in a useful way.
# TODO: all these devices returning just fd need proper interface descriptions.
openat$vcs(fd const[AT_FDCWD], file ptr[in, string["/dev/vcs"]], flags flags[open_flags], mode const[0]) fd
syz_open_dev$vcsn(dev ptr[in, string["/dev/vcs#"]], id intptr, flags flags[open_flags]) fd
openat$vcsa(fd const[AT_FDCWD], file ptr[in, string["/dev/vcsa"]], flags flags[open_flags], mode const[0]) fd
syz_open_dev$vcsa(dev ptr[in, string["/dev/vcsa#"]], id intptr, flags flags[open_flags]) fd
openat$vcsu(fd const[AT_FDCWD], file ptr[in, string["/dev/vcsu"]], flags flags[open_flags], mode const[0]) fd
syz_open_dev$vcsu(dev ptr[in, string["/dev/vcsu#"]], id intptr, flags flags[open_flags]) fd
syz_open_dev$ircomm(dev ptr[in, string["/dev/ircomm#"]], id intptr, flags flags[open_flags]) fd
syz_open_dev$audion(dev ptr[in, string["/dev/audio#"]], id intptr, flags flags[open_flags]) fd

openat$null(fd const[AT_FDCWD], file ptr[in, string["/dev/null"]], flags flags[open_flags], mode const[0]) fd
openat$zero(fd const[AT_FDCWD], file ptr[in, string["/dev/zero"]], flags flags[open_flags], mode const[0]) fd
openat$full(fd const[AT_FDCWD], file ptr[in, string["/dev/full"]], flags flags[open_flags], mode const[0]) fd
openat$irnet(fd const[AT_FDCWD], file ptr[in, string["/dev/irnet"]], flags flags[open_flags], mode const[0]) fd
openat$hwrng(fd const[AT_FDCWD], file ptr[in, string["/dev/hwrng"]], flags flags[open_flags], mode const[0]) fd
openat$hpet(fd const[AT_FDCWD], file ptr[in, string["/dev/hpet"]], flags flags[open_flags], mode const[0]) fd
openat$autofs(fd const[AT_FDCWD], file ptr[in, string["/dev/autofs"]], flags flags[open_flags], mode const[0]) fd_autofs
openat$keychord(fd const[AT_FDCWD], file ptr[in, string["/dev/keychord"]], flags flags[open_flags], mode const[0]) fd
openat$zygote(fd const[AT_FDCWD], file ptr[in, string["/dev/socket/zygote"]], flags flags[open_flags], mode const[0]) fd
openat$pktcdvd(fd const[AT_FDCWD], file ptr[in, string["/dev/pktcdvd/control"]], flags flags[open_flags], mode const[0]) fd
openat$lightnvm(fd const[AT_FDCWD], file ptr[in, string["/dev/lightnvm/control"]], flags flags[open_flags], mode const[0]) fd
openat$xenevtchn(fd const[AT_FDCWD], file ptr[in, string["/dev/xen/evtchn"]], flags flags[open_flags], mode const[0]) fd
openat$dlm_control(fd const[AT_FDCWD], file ptr[in, string["/dev/dlm-control"]], flags flags[open_flags], mode const[0]) fd
openat$dlm_monitor(fd const[AT_FDCWD], file ptr[in, string["/dev/dlm-monitor"]], flags flags[open_flags], mode const[0]) fd
openat$dlm_plock(fd const[AT_FDCWD], file ptr[in, string["/dev/dlm_plock"]], flags flags[open_flags], mode const[0]) fd
openat$btrfs_control(fd const[AT_FDCWD], file ptr[in, string["/dev/btrfs-control"]], flags flags[open_flags], mode const[0]) fd
openat$ubi_ctrl(fd const[AT_FDCWD], file ptr[in, string["/dev/ubi_ctrl"]], flags flags[open_flags], mode const[0]) fd
openat$cachefiles(fd const[AT_FDCWD], file ptr[in, string["/dev/cachefiles"]], flags flags[open_flags], mode const[0]) fd
openat$ndctl0(fd const[AT_FDCWD], file ptr[in, string["/dev/ndctl0"]], flags flags[open_flags], mode const[0]) fd
openat$nmem0(fd const[AT_FDCWD], file ptr[in, string["/dev/nmem0"]], flags flags[open_flags], mode const[0]) fd
openat$nvram(fd const[AT_FDCWD], file ptr[in, string["/dev/nvram"]], flags flags[open_flags], mode const[0]) fd
openat$ocfs2_control(fd const[AT_FDCWD], file ptr[in, string["/dev/ocfs2_control"]], flags flags[open_flags], mode const[0]) fd
openat$nvme_fabrics(fd const[AT_FDCWD], file ptr[in, string["/dev/nvme-fabrics"]], flags flags[open_flags], mode const[0]) fd

pipefd {
	rfd	fd
	wfd	fd
}

type iovec[DIR, T] {
	addr	ptr[DIR, T]
	len	len[addr, intptr]
}
type iovec_in iovec[in, array[int8]]
type iovec_out iovec[out, array[int8]]
type iovec_inout iovec[inout, array[int8]]

stat {
	st_dev		intptr
	st_ino		intptr
	st_mode		int32
	st_nlink	int32
	st_uid		uid
	st_gid		gid
	st_rdev		intptr
	__pad1		const[0, intptr]
	st_size		intptr
	st_blksize	int32
	__pad2		const[0, int32]
	st_blocks	intptr
	st_atime	intptr
	st_atime_nsec	intptr
	st_mtime	intptr
	st_mtime_nsec	intptr
	st_ctime	intptr
	st_ctime_nsec	intptr
	__unused4	const[0, int32]
	__unused5	const[0, int32]
}

statx {
	mask		int32
	blksize		int32
	attributes	int64
	nlink		int32
	uid		uid
	gid		gid
	mode		int16
	__spare0	int16
	ino		int64
	size		int64
	blocks		int64
	__spare1	int64
	atime		statx_timestamp
	btime		statx_timestamp
	ctime		statx_timestamp
	mtime		statx_timestamp
	rdev_major	int32
	rdev_minor	int32
	dev_major	int32
	dev_minor	int32
	__spare2	array[int64, 14]
}

define STAT64_SIZE	sizeof(struct stat64)

type stat64 array[int8, STAT64_SIZE]

pollfd {
	fd	fd
	events	flags[pollfd_events, int16]
	revents	const[0, int16]
}

sigset_t {
	mask	array[intptr, _NSIG_WORDS]
}

sigset_size {
	ss	ptr[inout, sigset_t]
	len	len[ss, intptr]
}

resource time_sec[intptr]
resource time_nsec[intptr]
resource time_usec[intptr]

# prog knowns about this struct type
timespec {
	sec	time_sec
	nsec	time_nsec
}

# prog knowns about this struct type
timeval {
	sec	time_sec
	usec	time_usec
}

statx_timestamp {
	sec		int64
	nsec		int32
	__reserved	int32
}

itimerspec {
	interv	timespec
	value	timespec
}

itimerval {
	interv	timeval
	value	timeval
}

utimbuf {
	actime	intptr
	modtime	intptr
}

sigevent {
	val	const[0, intptr]
	signo	signalno
	notify	flags[sigev_notify, int32]
	u	sigevent_u
} [size[SIGEVENT_SIZE]]

sigevent_u [
	tid	pid
	thr	sigevent_thread
]

define SIGEVENT_SIZE	sizeof(struct sigevent)

sigevent_thread {
# NEED: this is function pointer and pthread_attr_t (?)
	func	buffer[in]
	attr	buffer[in]
}

cap_header {
	var	flags[cap_version, int32]
	pid	pid
}

cap_data {
	eff0	int32
	perm0	int32
	inher0	int32
	eff1	int32
	perm1	int32
	inher1	int32
}

epoll_event {
	ev	flags[epoll_ev, int32]
	data	const[0, int64]
# TODO: this packed only on amd64
} [packed]

# TODO: fd_set needs to be a separate type
fd_set {
	mask0	int64
	mask1	int64
	mask2	int64
	mask3	int64
	mask4	int64
	mask5	int64
	mask6	int64
	mask7	int64
}

sock_fprog {
	len	len[filter, int16]
	filter	ptr[in, array[sock_filter]]
}

sock_filter {
	code	int16
	jt	int8
	jf	int8
	k	int32
}

file_handle [
	raw				file_handle_raw
	shmem				file_handle_t[1, fid_shmem]
	fuse				file_handle_t[0x81, fid_fuse]
	fuse_with_parent		file_handle_t[0x82, fid_fuse_with_parent]
	nfs				file_handle_t[36, fid_nfs]
	FILEID_INO32_GEN		file_handle_t[FILEID_INO32_GEN, fid_FILEID_INO32_GEN]
	FILEID_INO32_GEN_PARENT		file_handle_t[FILEID_INO32_GEN_PARENT, fid_FILEID_INO32_GEN_PARENT]
	FILEID_UDF_WITHOUT_PARENT	file_handle_t[FILEID_UDF_WITHOUT_PARENT, fid_FILEID_UDF_WITHOUT_PARENT]
	FILEID_UDF_WITH_PARENT		file_handle_t[FILEID_UDF_WITH_PARENT, fid_FILEID_UDF_WITH_PARENT]
	FILEID_BTRFS_WITH_PARENT	file_handle_t[FILEID_BTRFS_WITH_PARENT, btrfs_fid]
	FILEID_BTRFS_WITH_PARENT_ROOT	file_handle_t[FILEID_BTRFS_WITH_PARENT, btrfs_fid]
	FILEID_BTRFS_WITHOUT_PARENT	file_handle_t[FILEID_BTRFS_WITH_PARENT, btrfs_fid]
	ceph_nfs_snapfh			file_handle_t[FILEID_BTRFS_WITH_PARENT, ceph_nfs_snapfh]
	ceph_nfs_fh			file_handle_t[FILEID_INO32_GEN, ceph_nfs_fh]
	ceph_nfs_confh			file_handle_t[FILEID_INO32_GEN_PARENT, ceph_nfs_confh]
	GFS2_SMALL_FH_SIZE		file_handle_t[GFS2_SMALL_FH_SIZE, fid_GFS2_SMALL_FH_SIZE]
	GFS2_LARGE_FH_SIZE		file_handle_t[GFS2_LARGE_FH_SIZE, fid_GFS2_LARGE_FH_SIZE]
	OVL_FILEID_V0			file_handle_t[OVL_FILEID_V0, ovl_fb]
	OVL_FILEID_V1			file_handle_t[OVL_FILEID_V1, ovl_fh]
	FILEID_NILFS_WITHOUT_PARENT	file_handle_t[FILEID_NILFS_WITHOUT_PARENT, nilfs_fid]
	FILEID_NILFS_WITH_PARENT	file_handle_t[FILEID_NILFS_WITH_PARENT, nilfs_fid]
	reiserfs_2			file_handle_t[2, fid_reiserfs_2]
	reiserfs_3			file_handle_t[3, fid_reiserfs_3]
	reiserfs_4			file_handle_t[4, fid_reiserfs_4]
	reiserfs_5			file_handle_t[5, fid_reiserfs_5]
	reiserfs_6			file_handle_t[6, fid_reiserfs_6]
	xfs				file_handle_t[0x81, xfs_fid64]
	xfs_parent			file_handle_t[0x82, xfs_fid64]
	orangefs			file_handle_t[1, fid_orangefs]
	orangefs_parent			file_handle_t[2, fid_orangefs_parent]
	isofs				file_handle_t[1, isofs_fid]
	isofs_parent			file_handle_t[2, isofs_fid]
	ocfs2				file_handle_t[1, fid_ocfs2]
	ocfs2_parent			file_handle_t[2, fid_ocfs2_parent]
] [varlen]

file_handle_raw {
	handle_bytes	bytesize[f_handle, int32]
	handle_type	flags[fid_type, int32]
	f_handle	align32[array[int8]]
}

type file_handle_t[TYPE, FID] {
	handle_bytes	bytesize[f_handle, int32]
	handle_type	const[TYPE, int32]
	f_handle	FID
}

fid_shmem {
	gen	int32
	ino	int64
} [packed]

fid_fuse {
	nodeid_hi	int32
	nodeid_lo	int32
	generation	int32
}

fid_fuse_with_parent {
	base		fid_fuse
	parent_fid	fid_fuse
} [packed]

fid_FILEID_INO32_GEN {
	ino	int32
	gen	int32
}

fid_FILEID_INO32_GEN_PARENT {
	base		fid_FILEID_INO32_GEN
	parent_ino	int32
	parent_gen	int32
}

fid_FILEID_UDF_WITHOUT_PARENT {
	block		int32
	partref		int16
	parent_partref	int16
	generation	int32
}

fid_FILEID_UDF_WITH_PARENT {
	base			fid_FILEID_UDF_WITHOUT_PARENT
	parent_block		int32
	parent_generation	int32
}

btrfs_fid {
	objectid		int64
	root_objectid		int64
	gen			int32
	parent_objectid		int64
	parent_gen		int32
	parent_root_objectid	int64
} [packed]

ceph_nfs_snapfh {
	ino		int64
	snapid		int64
	parent_ino	int64
	hash		int32
} [packed]

ceph_nfs_fh {
	ino	int64
}

ceph_nfs_confh {
	ino		int64
	parent_ino	int64
}

fid_GFS2_SMALL_FH_SIZE {
	no_formal_ino_hi	int32be
	no_formal_ino_lo	int32be
	no_addr_hi		int32be
	no_addr_lo		int32be
}

fid_GFS2_LARGE_FH_SIZE {
	base		fid_GFS2_SMALL_FH_SIZE
	parent_fid	fid_GFS2_SMALL_FH_SIZE
}

define GFS2_SMALL_FH_SIZE	4
define GFS2_LARGE_FH_SIZE	8

fid_nfs {
	fileid_hi	int32
	fileid_lo	int32
	type		int32
# The total size is encoded in file_handle.handle_type, so we make this const size.
	size		const[NFS_MAXFHSIZE, int16]
	data		array[int8, NFS_MAXFHSIZE]
} [align[4]]

fid_reiserfs_2 {
	objectid	int32
	dir_id		int32
}

fid_reiserfs_3 {
	objectid	int32
	dir_id		int32
	generation	int32
}

fid_reiserfs_4 {
	objectid	int32
	dir_id		int32
	parent_objectid	int32
	parent_dir_id	int32
}

fid_reiserfs_5 {
	objectid	int32
	dir_id		int32
	generation	int32
	parent_objectid	int32
	parent_dir_id	int32
}

fid_reiserfs_6 {
	objectid		int32
	dir_id			int32
	generation		int32
	parent_objectid		int32
	parent_dir_id		int32
	parent_generation	int32
}

xfs_fid64 {
	ino		int64
	gen		int32
	parent_ino	int64
	parent_gen	int32
} [align[4]]

fid_orangefs {
	khandle	array[int8, 16]
	fs_id	int32
}

fid_orangefs_parent {
	base		fid_orangefs
	parent_fid	fid_orangefs
}

nilfs_fid {
	cno		int64
	ino		int64
	gen		int32
	parent_gen	int32
	parent_ino	int64
} [packed]

isofs_fid {
	block			int32
	offset			int16
	parent_offset		int16
	generation		int32
	parent_block		int32
	parent_generation	int32
}

fid_ocfs2 {
	blkno_hi	int32
	blkno_lo	int32
	generation	int32
}

fid_ocfs2_parent {
	base		fid_ocfs2
	parent_fid	fid_ocfs2
}

fid_type = FILEID_ROOT, FILEID_INO32_GEN, FILEID_INO32_GEN_PARENT, FILEID_BTRFS_WITHOUT_PARENT, FILEID_BTRFS_WITH_PARENT, FILEID_BTRFS_WITH_PARENT_ROOT, FILEID_UDF_WITHOUT_PARENT, FILEID_UDF_WITH_PARENT, FILEID_NILFS_WITHOUT_PARENT, FILEID_NILFS_WITH_PARENT, FILEID_FAT_WITHOUT_PARENT, FILEID_FAT_WITH_PARENT, FILEID_LUSTRE, FILEID_KERNFS

mq_attr {
	mq_flags	intptr
	mq_maxmsg	intptr
	mq_msgsize	intptr
	mq_curmsgs	intptr
	__reserved	array[const[0, intptr], 4]
}

kexec_segment {
	buf	buffer[in]
	sz	len[buf, intptr]
# TODO: this is address in kernel
	mem	intptr
	memsz	intptr
}

rusage {
	utime	timeval
	stime	timeval
	maxrss	intptr
	ixrss	intptr
	idrss	intptr
	isrss	intptr
	minflt	intptr
	majflt	intptr
	nswap	intptr
	inblock	intptr
	oublock	intptr
	msgsnd	intptr
	msgrcv	intptr
	signals	intptr
	nvcsw	intptr
	nivcsw	intptr
}

rlimit {
	soft	intptr
	hard	intptr
}

sigaction {
	sa_handler	ptr[in, text[target]]
	sa_flags	flags[sigaction_flags, intptr]
	sa_restorer	ptr[in, text[target]]
	sa_mask		sigset_t
}

sigaltstack {
	ss_sp		ptr[out, array[int8]]
	ss_flags	flags[sigaltstack_flags, int32]
	ss_size		bytesize[ss_sp, intptr]
}

sigaltstack_flags = SS_ONSTACK, SS_DISABLE, SS_AUTODISARM

tms {
	utime	intptr
	stime	intptr
	cutime	intptr
	cstime	intptr
}

siginfo {
	signo	signalno
	errno	int32
	code	int32
} [size[SIGINFO_SIZE]]

define SIGINFO_SIZE	sizeof(struct siginfo)

timex {
	stuff0	intptr
	stuff1	intptr
	stuff2	intptr
	stuff3	intptr
	stuff4	intptr
	stuff5	intptr
	stuff6	intptr
	stuff7	intptr
	stuff8	intptr
	stuff9	intptr
	stuff10	intptr
	stuff11	intptr
	stuff12	intptr
	stuff13	intptr
	stuff14	intptr
	stuff15	intptr
	stuff16	intptr
	stuff17	intptr
	stuff18	intptr
	stuff19	intptr
	stuff20	intptr
	stuff21	intptr
	stuff22	intptr
	stuff23	intptr
	stuff24	intptr
	stuff25	intptr
}

ustat {
	f_tfree		int32
	f_tinode	intptr
	f_fname		array[int8, 6]
	f_fpack		array[int8, 6]
}

user_desc {
	entry_number	int32
# Base should be vma and limit should be len[base]
# But these fields are int32, so we can't use vma.
	base_addr	flags[user_desc_bases, int32]
	limit		flags[user_desc_limits, int32]
	seg_32bit	int32:1
	contents	int32:2
	read_exec_only	int32:1
	limit_in_pages	int32:1
	seg_not_present	int32:1
	useable		int32:1
	lm		int32:1
}

user_desc_bases = 0, 4096, 1048576, 536870912, 536872960, 536875008, -1
user_desc_limits = 0, 1024, 4096, 8192, 16384, -1

sched_attr {
	size		bytesize[parent, int32]
	sched_policy	flags[sched_policy, int32]
	sched_flags	flags[sched_attr_flags, int64]
	sched_nice	int32
	sched_priority	int32
	sched_runtime	int64
	sched_deadline	int64
	sched_period	int64
	sched_util_min	int32
	sched_util_max	int32
}

sched_policy = SCHED_NORMAL, SCHED_FIFO, SCHED_RR, SCHED_BATCH, SCHED_IDLE, SCHED_DEADLINE
sched_attr_flags = SCHED_FLAG_RESET_ON_FORK, SCHED_FLAG_RECLAIM, SCHED_FLAG_DL_OVERRUN, SCHED_FLAG_KEEP_POLICY, SCHED_FLAG_KEEP_PARAMS, SCHED_FLAG_UTIL_CLAMP_MIN, SCHED_FLAG_UTIL_CLAMP_MAX, SCHED_FLAG_SUGOV

flock {
	type	flags[flock_type, int16]
	whence	flags[seek_whence, int16]
	start	intptr
	len	intptr
	pid	pid
}

f_owner_ex {
	type	flags[f_owner_type, int32]
	pid	pid
}

ucred {
	pid	pid
	uid	uid
	gid	gid
}

kcmp_epoll_slot {
	efd	fd_epoll
	tfd	fd
	toff	int32
}

open_how {
	flags	flags[open_flags, int64]
	mode	flags[open_mode, int64]
	resolve	flags[resolve_flags, int64]
}

# Not providing O_WRONLY means O_RDONLY.
open_flags = O_WRONLY, O_RDWR, O_APPEND, FASYNC, O_CLOEXEC, O_CREAT, O_DIRECT, O_DIRECTORY, O_EXCL, O_LARGEFILE, O_NOATIME, O_NOCTTY, O_NOFOLLOW, O_NONBLOCK, O_PATH, O_SYNC, O_TRUNC, __O_TMPFILE
open_mode = S_IRUSR, S_IWUSR, S_IXUSR, S_IRGRP, S_IWGRP, S_IXGRP, S_IROTH, S_IWOTH, S_IXOTH
resolve_flags = RESOLVE_CACHED, RESOLVE_BENEATH, RESOLVE_IN_ROOT, RESOLVE_NO_MAGICLINKS, RESOLVE_NO_SYMLINKS, RESOLVE_NO_XDEV
madvise_flags = MADV_NORMAL, MADV_RANDOM, MADV_SEQUENTIAL, MADV_WILLNEED, MADV_DONTNEED, MADV_REMOVE, MADV_DONTFORK, MADV_DOFORK, MADV_HWPOISON, MADV_SOFT_OFFLINE, MADV_MERGEABLE, MADV_UNMERGEABLE, MADV_HUGEPAGE, MADV_NOHUGEPAGE, MADV_DONTDUMP, MADV_DODUMP, MADV_WIPEONFORK, MADV_KEEPONFORK, MADV_COLD, MADV_PAGEOUT, MADV_POPULATE_READ, MADV_POPULATE_WRITE, MADV_COLLAPSE
fadvise_flags = POSIX_FADV_NORMAL, POSIX_FADV_SEQUENTIAL, POSIX_FADV_RANDOM, POSIX_FADV_NOREUSE, POSIX_FADV_WILLNEED, POSIX_FADV_DONTNEED
move_pages_flags = MPOL_MF_MOVE, MPOL_MF_MOVE_ALL
msync_flags = MS_ASYNC, MS_SYNC, MS_INVALIDATE
mmap_prot = PROT_EXEC, PROT_READ, PROT_WRITE, PROT_SEM, PROT_GROWSDOWN, PROT_GROWSUP
mmap_flags = MAP_SHARED, MAP_PRIVATE, MAP_32BIT, MAP_ANONYMOUS, MAP_DENYWRITE, MAP_EXECUTABLE, MAP_FILE, MAP_FIXED, MAP_GROWSDOWN, MAP_HUGETLB, MAP_LOCKED, MAP_NONBLOCK, MAP_NORESERVE, MAP_POPULATE, MAP_STACK, MAP_UNINITIALIZED, MAP_SHARED_VALIDATE, MAP_SYNC, MAP_FIXED_NOREPLACE
mremap_flags = MREMAP_MAYMOVE, MREMAP_FIXED, MREMAP_DONTUNMAP
mbind_mode = MPOL_DEFAULT, MPOL_BIND, MPOL_INTERLEAVE, MPOL_PREFERRED, MPOL_F_STATIC_NODES, MPOL_F_RELATIVE_NODES
mbind_flags = MPOL_MF_STRICT, MPOL_MF_MOVE, MPOL_MF_MOVE_ALL
pipe_flags = O_NONBLOCK, O_CLOEXEC, O_DIRECT, O_NOTIFICATION_PIPE
mlock_flags = MLOCK_ONFAULT
mlockall_flags = MCL_CURRENT, MCL_FUTURE, MCL_ONFAULT
dup_flags = O_CLOEXEC
splice_flags = SPLICE_F_MOVE, SPLICE_F_NONBLOCK, SPLICE_F_MORE, SPLICE_F_GIFT
seek_whence = SEEK_SET, SEEK_CUR, SEEK_END, SEEK_DATA, SEEK_HOLE
signalfd_flags = SFD_NONBLOCK, SFD_CLOEXEC
eventfd_flags = EFD_CLOEXEC, EFD_NONBLOCK, EFD_SEMAPHORE
timerfd_create_flags = TFD_NONBLOCK, TFD_CLOEXEC
timerfd_settime_flags = TFD_TIMER_ABSTIME, TFD_TIMER_CANCEL_ON_SET
clock_type = CLOCK_REALTIME, CLOCK_REALTIME_COARSE, CLOCK_MONOTONIC, CLOCK_MONOTONIC_COARSE, CLOCK_MONOTONIC_RAW, CLOCK_BOOTTIME, CLOCK_PROCESS_CPUTIME_ID, CLOCK_THREAD_CPUTIME_ID, CLOCK_REALTIME_ALARM, CLOCK_BOOTTIME_ALARM
sigev_notify = SIGEV_NONE, SIGEV_SIGNAL, SIGEV_THREAD, SIGEV_THREAD_ID
cap_version = _LINUX_CAPABILITY_VERSION_1, _LINUX_CAPABILITY_VERSION_2, _LINUX_CAPABILITY_VERSION_3
epoll_flags = EPOLL_CLOEXEC
epoll_ev = POLLIN, POLLOUT, POLLRDHUP, POLLPRI, POLLERR, POLLHUP, EPOLLET, EPOLLONESHOT, EPOLLEXCLUSIVE, EPOLLWAKEUP
pollfd_events = POLLIN, POLLPRI, POLLOUT, POLLERR, POLLHUP, POLLNVAL, POLLRDNORM, POLLRDBAND, POLLWRNORM, POLLWRBAND, POLLMSG, POLLREMOVE, POLLRDHUP, POLLFREE, POLL_BUSY_LOOP
mknod_mode = S_IFREG, S_IFCHR, S_IFBLK, S_IFIFO, S_IFSOCK, S_ISUID, S_ISGID, S_ISVTX, open_mode
at_flags = AT_EMPTY_PATH, AT_SYMLINK_NOFOLLOW, AT_SYMLINK_FOLLOW, AT_NO_AUTOMOUNT, AT_EMPTY_PATH
fallocate_mode = FALLOC_FL_KEEP_SIZE, FALLOC_FL_PUNCH_HOLE, FALLOC_FL_COLLAPSE_RANGE, FALLOC_FL_ZERO_RANGE, FALLOC_FL_INSERT_RANGE, FALLOC_FL_UNSHARE_RANGE, FALLOC_FL_NO_HIDE_STALE
linkat_flags = AT_EMPTY_PATH, AT_SYMLINK_FOLLOW
unlinkat_flags = AT_REMOVEDIR
renameat2_flags = RENAME_EXCHANGE, RENAME_NOREPLACE, RENAME_WHITEOUT
flock_op = LOCK_SH, LOCK_EX, LOCK_UN, LOCK_NB
statx_flags = AT_SYMLINK_NOFOLLOW, AT_SYMLINK_FOLLOW, AT_NO_AUTOMOUNT, AT_EMPTY_PATH, AT_STATX_SYNC_TYPE, AT_STATX_SYNC_AS_STAT, AT_STATX_FORCE_SYNC, AT_STATX_DONT_SYNC
statx_mask = STATX_TYPE, STATX_MODE, STATX_NLINK, STATX_UID, STATX_GID, STATX_ATIME, STATX_MTIME, STATX_CTIME, STATX_INO, STATX_SIZE, STATX_BLOCKS, STATX_BASIC_STATS, STATX_BTIME
name_to_handle_at_flags = AT_EMPTY_PATH, AT_SYMLINK_FOLLOW, AT_HANDLE_FID
mq_open_flags = O_WRONLY, O_RDWR, O_NONBLOCK, O_CREAT, O_EXCL, O_CREAT
finit_module_flags = MODULE_INIT_IGNORE_MODVERSIONS, MODULE_INIT_IGNORE_VERMAGIC
delete_module_flags = O_NONBLOCK, O_TRUNC
kexec_load_flags = KEXEC_ON_CRASH, KEXEC_PRESERVE_CONTEXT, KEXEC_ARCH_386, KEXEC_ARCH_X86_64, KEXEC_ARCH_PPC, KEXEC_ARCH_PPC64, KEXEC_ARCH_IA_64, KEXEC_ARCH_ARM, KEXEC_ARCH_S390, KEXEC_ARCH_SH, KEXEC_ARCH_MIPS, KEXEC_ARCH_MIPS_LE, KEXEC_ARCH_DEFAULT
faccessat_flags = AT_EACCESS, AT_SYMLINK_NOFOLLOW, AT_EMPTY_PATH
sync_file_flags = SYNC_FILE_RANGE_WAIT_BEFORE, SYNC_FILE_RANGE_WRITE, SYNC_FILE_RANGE_WAIT_AFTER
kcmp_flags = KCMP_FILE, KCMP_FILES, KCMP_FS, KCMP_IO, KCMP_SIGHAND, KCMP_SYSVSEM, KCMP_VM
rusage_who = RUSAGE_SELF, RUSAGE_CHILDREN, RUSAGE_THREAD
rlimit_type = RLIMIT_AS, RLIMIT_CORE, RLIMIT_CPU, RLIMIT_DATA, RLIMIT_FSIZE, RLIMIT_LOCKS, RLIMIT_MEMLOCK, RLIMIT_MSGQUEUE, RLIMIT_NICE, RLIMIT_NOFILE, RLIMIT_NPROC, RLIMIT_RSS, RLIMIT_RTPRIO, RLIMIT_RTTIME, RLIMIT_SIGPENDING, RLIMIT_STACK
# The ones that disable console output are intentionally omitted.
syslog_cmd = SYSLOG_ACTION_CLOSE, SYSLOG_ACTION_OPEN, SYSLOG_ACTION_READ, SYSLOG_ACTION_READ_ALL, SYSLOG_ACTION_READ_CLEAR, SYSLOG_ACTION_SIZE_UNREAD, SYSLOG_ACTION_SIZE_BUFFER
ioprio_which_pid = IOPRIO_WHO_PROCESS, IOPRIO_WHO_PGRP
ioprio_which_uid = IOPRIO_WHO_USER
personality_flags = PER_LINUX, PER_SVR4, PER_SVR3, PER_OSR5, PER_WYSEV386, PER_ISCR4, PER_BSD, PER_XENIX, PER_LINUX32, PER_IRIX32, PER_IRIXN32, PER_IRIX64, PER_RISCOS, PER_SOLARIS, PER_UW7, PER_OSF4, PER_HPUX, ADDR_NO_RANDOMIZE, MMAP_PAGE_ZERO, ADDR_COMPAT_LAYOUT, READ_IMPLIES_EXEC, ADDR_LIMIT_32BIT, SHORT_INODE, WHOLE_SECONDS, STICKY_TIMEOUTS, ADDR_LIMIT_3GB
clock_id = CLOCK_REALTIME, CLOCK_REALTIME_COARSE, CLOCK_MONOTONIC, CLOCK_MONOTONIC_COARSE, CLOCK_MONOTONIC_RAW, CLOCK_BOOTTIME, CLOCK_PROCESS_CPUTIME_ID, CLOCK_THREAD_CPUTIME_ID
sigprocmask_how = SIG_BLOCK, SIG_UNBLOCK, SIG_SETMASK
getitimer_which = ITIMER_REAL, ITIMER_VIRTUAL, ITIMER_PROF
wait_options = WNOHANG, WUNTRACED, WCONTINUED, WEXITED, WSTOPPED, WCONTINUED, WNOHANG, WNOWAIT, __WCLONE, __WALL, __WNOTHREAD
waitid_which = P_PID, P_PGID, P_ALL
sigaction_flags = SA_NOCLDSTOP, SA_NOCLDWAIT, SA_NODEFER, SA_ONSTACK, SA_RESETHAND, SA_RESTART, SA_SIGINFO
timer_flags = TIMER_ABSTIME
utimensat_flags = AT_SYMLINK_NOFOLLOW
priority_which = PRIO_PROCESS, PRIO_PGRP, PRIO_USER
mempolicy_flags = MPOL_F_MEMS_ALLOWED, MPOL_F_ADDR, MPOL_F_NODE
ptrace_req = PTRACE_LISTEN, PTRACE_KILL, PTRACE_INTERRUPT, PTRACE_ATTACH, PTRACE_DETACH
ptrace_req_peek = PTRACE_PEEKTEXT, PTRACE_PEEKDATA
ptrace_req_poke = PTRACE_POKETEXT, PTRACE_POKEDATA
ptrace_req_getregs = PTRACE_GETREGS, PTRACE_GETFPREGS
ptrace_req_setregs = PTRACE_SETREGS, PTRACE_SETFPREGS
ptrace_req_setopts = PTRACE_SETOPTIONS, PTRACE_SEIZE
ptrace_req_cont = PTRACE_CONT, PTRACE_SYSCALL, PTRACE_SINGLESTEP, PTRACE_SYSEMU, PTRACE_SYSEMU_SINGLESTEP
pthread_regset = NT_PRSTATUS, NT_PRFPREG, NT_PRPSINFO, NT_TASKSTRUCT, NT_AUXV, NT_386_TLS, NT_386_IOPERM, NT_X86_XSTATE
ptrace_options = PTRACE_O_EXITKILL, PTRACE_O_TRACECLONE, PTRACE_O_TRACEEXEC, PTRACE_O_TRACEEXIT, PTRACE_O_TRACEFORK, PTRACE_O_TRACESYSGOOD, PTRACE_O_TRACEVFORK, PTRACE_O_TRACEVFORKDONE
fcntl_dupfd = F_DUPFD, F_DUPFD_CLOEXEC
fcntl_getflags = F_GETFD, F_GETFL, F_GETSIG, F_GETLEASE, F_GETPIPE_SZ, F_GET_SEALS
fcntl_lock = F_SETLK, F_SETLKW, F_GETLK, F_OFD_GETLK, F_OFD_SETLK, F_OFD_SETLKW
fcntl_flags = FD_CLOEXEC
fcntl_status = O_APPEND, FASYNC, O_DIRECT, O_NOATIME, O_NONBLOCK
flock_type = F_RDLCK, F_WRLCK, F_UNLCK
f_owner_type = F_OWNER_TID, F_OWNER_PID, F_OWNER_PGRP
fcntl_notify = DN_MULTISHOT, DN_ACCESS, DN_MODIFY, DN_CREATE, DN_DELETE, DN_RENAME, DN_ATTRIB
seal_types = F_SEAL_SEAL, F_SEAL_SHRINK, F_SEAL_GROW, F_SEAL_WRITE
ioctl_int_in = FIONBIO, FIOASYNC
ioctl_int_out = FIOQSIZE, FIGETBSZ
fcntl_rw_hint = RWF_WRITE_LIFE_NOT_SET, RWH_WRITE_LIFE_NONE, RWH_WRITE_LIFE_SHORT, RWH_WRITE_LIFE_MEDIUM, RWH_WRITE_LIFE_LONG, RWH_WRITE_LIFE_EXTREME
fiemap_flags = FIEMAP_FLAG_SYNC, FIEMAP_FLAG_XATTR, FIEMAP_FLAG_CACHE
fiemap_extent_flags = FIEMAP_EXTENT_LAST, FIEMAP_EXTENT_UNKNOWN, FIEMAP_EXTENT_DELALLOC, FIEMAP_EXTENT_ENCODED, FIEMAP_EXTENT_DATA_ENCRYPTED, FIEMAP_EXTENT_NOT_ALIGNED, FIEMAP_EXTENT_DATA_INLINE, FIEMAP_EXTENT_DATA_TAIL, FIEMAP_EXTENT_UNWRITTEN, FIEMAP_EXTENT_MERGED, FIEMAP_EXTENT_SHARED
getrandom_flags = GRND_NONBLOCK, GRND_RANDOM
clone_flags = CLONE_VM, CLONE_FS, CLONE_FILES, CLONE_SIGHAND, CLONE_PTRACE, CLONE_VFORK, CLONE_PARENT, CLONE_THREAD, CLONE_NEWNS, CLONE_SYSVSEM, CLONE_SETTLS, CLONE_PARENT_SETTID, CLONE_CHILD_CLEARTID, CLONE_UNTRACED, CLONE_CHILD_SETTID, CLONE_NEWCGROUP, CLONE_NEWUTS, CLONE_NEWIPC, CLONE_NEWUSER, CLONE_NEWPID, CLONE_NEWNET, CLONE_IO, CLONE_PIDFD, CLONE_NEWTIME
clone3_flags = CLONE_CLEAR_SIGHAND, CLONE_INTO_CGROUP, clone_flags
ptrace_peeksiginfo_flags = PTRACE_PEEKSIGINFO_SHARED
close_range_flags = CLOSE_RANGE_UNSHARE

# Consts for target.SpecialFileLenghts.
_ = SMB_PATH_MAX, XT_CGROUP_PATH_MAX, XENSTORE_REL_PATH_MAX

# misc
_ = KCOV_INIT_TRACE, KCOV_ENABLE, KCOV_DISABLE, KCOV_TRACE_PC, KCOV_TRACE_CMP, PTRACE_TRACEME, SYSLOG_ACTION_CONSOLE_ON, SYSLOG_ACTION_CONSOLE_OFF, SYSLOG_ACTION_CONSOLE_LEVEL, SYSLOG_ACTION_CLEAR, __NR_mmap2

# Hardcode KCOV_REMOTE_ENABLE value for amd64 until new kcov patches reach mainline.
define KCOV_REMOTE_ENABLE	1075340134

# Not yet implemented syscalls:
# umask
# vhangup
# _sysctl
# nfsservctl
# getcpu

# Probably no sense in calling these.
# Also affect system-wide state, so not reproducible anyway.
# sched_get_priority_max
# sched_get_priority_min
]

29/2 2026/03/11 11:14 llm crash-repro-finder
0m Model:
gemini-3-pro-preview

Tokens:
							input: 74157
							output: 218
							thoughts: 646
						

Thoughts:
**Analyzing a Kernel Vulnerability Reproducer**

Okay, let's break this down. First, the `mremap` system call: I see the signature: `mremap(addr vma, len len[addr], newlen len[newaddr], flags flags[mremap_flags], newaddr vma)`. And the `mremap_flags` are defined as `MREMAP_MAYMOVE, MREMAP_FIXED, MREMAP_DONTUNMAP`.  So, `MREMAP_MAYMOVE | MREMAP_FIXED` translates to a flag value of `3`.

Now, let's construct a reproducer. The original attempt, as I understand it, involves two threads, A and B.

Thread A looks like this:

```
mmap(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
mmap(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
```

The use of `mmap` with `MAP_FIXED` over existing mappings should trigger `vms_complete_munmap_vmas`. But looking closer, the flags aren't quite right. Let's dig into the `mmap` flags. I see `MAP_FIXED` is `0x10`, `MAP_ANONYMOUS` is `0x20`, and `MAP_PRIVATE` is `0x2`.  So, `0x32` represents `MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE`.

Thread B is defined as:

```
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000002000/0x1000)=nil)
```

But, hold on. `mremap` needs `MREMAP_MAYMOVE` which is `1`. This is important, as `check_mremap_params` checks `vrm_implies_new_addr(vrm)`. Without `MREMAP_MAYMOVE`, it returns `-EINVAL` if `vrm_implies_new_addr(vrm)`.  This is precisely what needs to be triggered. Now, the plan is to craft a syzkaller program to expose this vulnerability.



30/2 2026/03/11 11:14 tool reproduce-crash
0m Args:
map[ReproSyz:mmap(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
mmap(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000002000/0x1000)=nil) (async)
]

Results:
map[ReproducedBugTitle: ReproducedCrashReport:]

31/2 2026/03/11 11:14 llm crash-repro-finder
0m Model:
gemini-3-pro-preview

Tokens:
							input: 75659
							output: 638
							thoughts: 806
						

Thoughts:
**Analyzing the Data Race Bug**

Okay, so the reproducer didn't immediately trigger the bug, which means I need to dig deeper. Let me revisit that bug report... Aha, it looks like a read-write data race on `0xffff888120891c88`. The stack traces are pointing towards `vms_complete_munmap_vmas` during `mmap` and `mremap` operations. Seems like we're hitting a race condition related to `mm->map_count`, as I suspected.

The essence of this is that the concurrent execution of `mmap` and `mremap` on the same memory map (`mm`) is the likely trigger. In my syzkaller setup, async calls within the same process are running in different threads. That means they share the same `mm` -- perfect! This gives me the necessary concurrency.

Wait a second... if I use `mmap` with `MAP_FIXED` over existing mappings, wouldn't that call `vms_complete_munmap_vmas` as well? Bingo! The `MAP_FIXED` flag will force the `mmap` to overwrite the existing mappings. This means that if `mmap` and `mremap` happen concurrently, then `mmap` is calling the same function and could possibly cause a race. Now, the next step is to make sure to loop this process to increase our chances of tripping the race.



32/2 2026/03/11 11:14 tool reproduce-crash
0m Args:
map[ReproSyz:mmap(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
mmap(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000002000/0x1000)=nil) (async)
mmap(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000002000/0x1000)=nil) (async)
mmap(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000002000/0x1000)=nil) (async)
mmap(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000002000/0x1000)=nil) (async)
]

Results:
map[ReproducedBugTitle: ReproducedCrashReport:]

33/2 2026/03/11 11:14 llm crash-repro-finder
0m Model:
gemini-3-pro-preview

Tokens:
							input: 77441
							output: 40
							thoughts: 779
						

Thoughts:
**Debugging a Persistent Kernel Bug**

Okay, still no crash, which is frustrating. Let's dig into this `mremap` issue. I need to verify that the execution flow is actually reaching the critical checks within `check_mremap_params` and, specifically, the access to `current->mm->map_count`.

Right, let's look at the `vrm_implies_new_addr` function first. It returns true if `vrm->flags` has either `MREMAP_FIXED` or `MREMAP_DONTUNMAP` flags set.  If `flags` are set to `3` (which is `MREMAP_MAYMOVE` and `MREMAP_FIXED`) it returns true, which is what I expect. Then it does some checks.

The code then checks: if the `new_addr` isn't silly, if the `new_addr` isn't page aligned, whether the fixed address implies a move and whether resize is allowed. Finally, it checks if `vrm->old_len` and `vrm->new_len` are equal when `MREMAP_DONTUNMAP` is set. After that it checks for overlap and then the critical one: `current->mm->map_count`.

Now, let's translate this to my syzkaller setup. I'm calling `mremap` with a specific set of parameters. I'm passing `0x7f0000000000` for the old address, a length of `0x1000`, a new length of `0x1000`, a flag of `3` (`MREMAP_MAYMOVE | MREMAP_FIXED`) and finally `0x7f0000002000` for the new address. So, I would expect this to reach `current->mm->map_count` in the check.

I'll check the `mmap` side of things too. It calls `vms_complete_munmap_vmas`. This is called in `__mmap_complete`, which itself is called from `mmap_region`. The `mmap_region` function gets called when `mmap` maps a new region.

Hang on... I thought that `vms_complete_munmap_vmas` was only called if there were pages in the mapping. Actually, I believe that `vms_complete_munmap_vmas` is called unconditionally in `__mmap_complete`. I should check that.



34/2 2026/03/11 11:14 tool codesearch-definition-source
0m Args:
map[ContextFile:mm/vma.c IncludeLines:true Name:__mmap_complete]

Results:
map[SourceCode:2566:	/*
2567:	 * __mmap_complete() - Unmap any VMAs we overlap, account memory mapping
2568:	 *                     statistics, handle locking and finalise the VMA.
2569:	 *
2570:	 * @map: Mapping state.
2571:	 * @vma: Merged or newly allocated VMA for the mmap()'d region.
2572:	 */
2573:	static void __mmap_complete(struct mmap_state *map, struct vm_area_struct *vma)
2574:	{
2575:		struct mm_struct *mm = map->mm;
2576:		vm_flags_t vm_flags = vma->vm_flags;
2577:	
2578:		perf_event_mmap(vma);
2579:	
2580:		/* Unmap any existing mapping in the area. */
2581:		vms_complete_munmap_vmas(&map->vms, &map->mas_detach);
2582:	
2583:		vm_stat_account(mm, vma->vm_flags, map->pglen);
2584:		if (vm_flags & VM_LOCKED) {
2585:			if ((vm_flags & VM_SPECIAL) || vma_is_dax(vma) ||
2586:						is_vm_hugetlb_page(vma) ||
2587:						vma == get_gate_vma(mm))
2588:				vm_flags_clear(vma, VM_LOCKED_MASK);
2589:			else
2590:				mm->locked_vm += map->pglen;
2591:		}
2592:	
2593:		if (vma->vm_file)
2594:			uprobe_mmap(vma);
2595:	
2596:		/*
2597:		 * New (or expanded) vma always get soft dirty status.
2598:		 * Otherwise user-space soft-dirty page tracker won't
2599:		 * be able to distinguish situation when vma area unmapped,
2600:		 * then new mapped in-place (which must be aimed as
2601:		 * a completely new data area).
2602:		 */
2603:		if (pgtable_supports_soft_dirty())
2604:			vm_flags_set(vma, VM_SOFTDIRTY);
2605:	
2606:		vma_set_page_prot(vma);
2607:	}
 SourceFile:mm/vma.c]

35/2 2026/03/11 11:14 llm crash-repro-finder
0m Model:
gemini-3-pro-preview

Tokens:
							input: 79526
							output: 1198
							thoughts: 444
						

Thoughts:
**Analyzing a Potential Data Race in Kernel Memory Management**

Okay, so I'm looking at this `vms_complete_munmap_vmas` function and I see it does this: `mm->map_count -= vms->vma_count`. Even if `vms->vma_count` is zero, the code still decrements by zero. My initial thought is, even if it's technically harmless, KCSAN is going to flag a potential data race on `mm->map_count` if there's a concurrent read, but it's probably not really. Now, wait a minute... `mmap` with `MAP_FIXED` *should* unmap existing mappings, meaning `vms->vma_count` would be greater than zero in that scenario. Okay, let's re-examine `vms_complete_munmap_vmas`... Yep, there it is: a write to `mm->map_count`.

And then, looking at `check_mremap_params`, there's a read of `mm->map_count` within this check: `if ((current->mm->map_count + 2) >= sysctl_max_map_count - 3)`. So, the potential data race is between these two functions as expected.

Now, I'm trying to figure out *why* the reproducer isn't triggering the bug. My first thought is: Is KCSAN even enabled in the test environment? I need to verify that. But then, it occurs to me that the `reproduce-crash` tool tests whether a syz repro program crashes the kernel. That program compiles and executes it in a VM. If this *is* a data race, it might be tough to trigger reliably, which would explain the lack of a crash.

Okay, let's try increasing the probability of triggering the race. We need to increase the number of threads or iterations. I could use a loop in a C program, or just leverage syzkaller's `async` calls with many iterations. Since syzkaller's `async` calls execute in separate threads, that's the more likely to trigger a data race. My next step will be to write a C reproducer using pseudo syscalls (if appropriate), or just write a syzkaller program with many `mmap` and `mremap` calls.



36/2 2026/03/11 11:15 tool reproduce-crash
0m Args:
map[ReproSyz:mmap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
mmap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000001000/0x1000)=nil) (async)
mmap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000001000/0x1000)=nil) (async)
mmap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000001000/0x1000)=nil) (async)
mmap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000001000/0x1000)=nil) (async)
mmap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000001000/0x1000)=nil) (async)
mmap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000001000/0x1000)=nil) (async)
mmap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000001000/0x1000)=nil) (async)
mmap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000001000/0x1000)=nil) (async)
]

Results:
map[ReproducedBugTitle: ReproducedCrashReport:]

37/2 2026/03/11 11:15 llm crash-repro-finder
0m Model:
gemini-3-pro-preview

Tokens:
							input: 81703
							output: 39
							thoughts: 1394
						

Thoughts:
**Analyzing `mremap` and Potential KCSAN Misses**

Okay, still no crash, which means I need to dig deeper. I'm taking another look at the `mremap` arguments to make sure I'm not missing anything obvious. Specifically: `mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000001000/0x1000)=nil)`. The `flags` are `MREMAP_MAYMOVE | MREMAP_FIXED` (value 3). The `newaddr` is `0x7f0000001000`. The `old_len` and `new_len` are both `0x1000`, and `addr` is `0x7f0000000000`.

Let's see, `mremap` has an overlap check: `vrm_overlaps(vrm)`. And `vrm_overlaps` checks if `end_old > start_new && end_new > start_old`. In this case: `start_old` is `0x7f0000000000`, `end_old` is `0x7f0000001000`, `start_new` is `0x7f0000001000`, and `end_new` is `0x7f0000002000`. So, that boils down to `0x7f0000001000 > 0x7f0000001000` which is false. No overlap, good.

But wait, does `mremap` actually *need* an existing VMA at `old_addr`? Yes, it looks up the VMA. Okay, now I remember, `check_mremap_params` is called *before* the VMA lookup. It looks like `check_mremap_params` doesn't care if the VMA exists, at all! This means it'll happily read `current->mm->map_count` regardless of whether the VMA is valid.

So why isn't KCSAN catching anything? Maybe the race window is too small? Or, perhaps `mmap` doesn't unmap anything because the VMA is already there, it just maps "over" it? Let's check `__mmap_complete`. Looks like it calls `vms_complete_munmap_vmas`. Okay, it seems that if `map->vms.vma_count` is zero, it still performs `mm->map_count -= 0`.

But, wait... `mm->map_count -= 0` is still a write! Even if zero! I should have seen something... But... C could optimize that to nothing, right? If the compiler knew it was zero. But the count `vms->vma_count` isn't compile-time constant! It's a read-modify-write on an `int`. Huh. Still, KCSAN *should* be reporting something if the value changes, and it doesn't.

Looking closer at the KCSAN report examples, it shows "value changed: 0x00000033 -> 0x00000034". But this means `map_count` increased! How could `mm->map_count` *increase* in `vms_complete_munmap_vmas`? Unless... `vms->vma_count` is negative? No, it's an `int`, but it represents the number of VMAs to unmap. Wait, the thing that adds to the count is `mmap_region`.

Okay, I need to check `mm/vma.c:2830` to check where that count is being modified. Aha! I think I see a possible issue. This needs more careful looking.



38/2 2026/03/11 11:15 tool codesearch-definition-source
0m Args:
map[ContextFile:mm/vma.c IncludeLines:true Name:mmap_region]

Results:
map[SourceCode:2781:	/**
2782:	 * mmap_region() - Actually perform the userland mapping of a VMA into
2783:	 * current->mm with known, aligned and overflow-checked @addr and @len, and
2784:	 * correctly determined VMA flags @vm_flags and page offset @pgoff.
2785:	 *
2786:	 * This is an internal memory management function, and should not be used
2787:	 * directly.
2788:	 *
2789:	 * The caller must write-lock current->mm->mmap_lock.
2790:	 *
2791:	 * @file: If a file-backed mapping, a pointer to the struct file describing the
2792:	 * file to be mapped, otherwise NULL.
2793:	 * @addr: The page-aligned address at which to perform the mapping.
2794:	 * @len: The page-aligned, non-zero, length of the mapping.
2795:	 * @vm_flags: The VMA flags which should be applied to the mapping.
2796:	 * @pgoff: If @file is specified, the page offset into the file, if not then
2797:	 * the virtual page offset in memory of the anonymous mapping.
2798:	 * @uf: Optionally, a pointer to a list head used for tracking userfaultfd unmap
2799:	 * events.
2800:	 *
2801:	 * Returns: Either an error, or the address at which the requested mapping has
2802:	 * been performed.
2803:	 */
2804:	unsigned long mmap_region(struct file *file, unsigned long addr,
2805:				  unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
2806:				  struct list_head *uf)
2807:	{
2808:		unsigned long ret;
2809:		bool writable_file_mapping = false;
2810:	
2811:		mmap_assert_write_locked(current->mm);
2812:	
2813:		/* Check to see if MDWE is applicable. */
2814:		if (map_deny_write_exec(vm_flags, vm_flags))
2815:			return -EACCES;
2816:	
2817:		/* Allow architectures to sanity-check the vm_flags. */
2818:		if (!arch_validate_flags(vm_flags))
2819:			return -EINVAL;
2820:	
2821:		/* Map writable and ensure this isn't a sealed memfd. */
2822:		if (file && is_shared_maywrite(vm_flags)) {
2823:			int error = mapping_map_writable(file->f_mapping);
2824:	
2825:			if (error)
2826:				return error;
2827:			writable_file_mapping = true;
2828:		}
2829:	
2830:		ret = __mmap_region(file, addr, len, vm_flags, pgoff, uf);
2831:	
2832:		/* Clear our write mapping regardless of error. */
2833:		if (writable_file_mapping)
2834:			mapping_unmap_writable(file->f_mapping);
2835:	
2836:		validate_mm(current->mm);
2837:		return ret;
2838:	}
 SourceFile:mm/vma.c]

39/2 2026/03/11 11:15 llm crash-repro-finder
0m Model:
gemini-3-pro-preview

Tokens:
							input: 85001
							output: 40
							thoughts: 30
						

Thoughts:
Here's my thought process, as if I were analyzing that code:

**Diving into the Memory Mapping Logic**

Okay, so I'm zeroing in on this `__mmap_region` function. I see it's down at line 2830, which gives me a rough idea of its place within the file's overall structure. It's likely a core component of how the system handles memory mapping, a crucial operation for file I/O, shared memory, and a whole host of other critical functionality. Given that it's positioned at line 2830, I expect it to be a key element for resource allocation.

Now, my next move is clear: I need to dig into `__mmap_region`. Understanding its internal workings is essential. I want to dissect its arguments, see how it's called, and grasp exactly how it interacts with the kernel's memory management subsystems. It's time to trace the execution and understand the underlying implementation and possible performance bottlenecks. Let's see what we've got.



40/2 2026/03/11 11:15 tool codesearch-definition-source
0m Args:
map[ContextFile:mm/vma.c IncludeLines:true Name:__mmap_region]

Results:
map[SourceCode:2713:	static unsigned long __mmap_region(struct file *file, unsigned long addr,
2714:			unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
2715:			struct list_head *uf)
2716:	{
2717:		struct mm_struct *mm = current->mm;
2718:		struct vm_area_struct *vma = NULL;
2719:		bool have_mmap_prepare = file && file->f_op->mmap_prepare;
2720:		VMA_ITERATOR(vmi, mm, addr);
2721:		MMAP_STATE(map, mm, &vmi, addr, len, pgoff, vm_flags, file);
2722:		struct vm_area_desc desc = {
2723:			.mm = mm,
2724:			.file = file,
2725:			.action = {
2726:				.type = MMAP_NOTHING, /* Default to no further action. */
2727:			},
2728:		};
2729:		bool allocated_new = false;
2730:		int error;
2731:	
2732:		map.check_ksm_early = can_set_ksm_flags_early(&map);
2733:	
2734:		error = __mmap_setup(&map, &desc, uf);
2735:		if (!error && have_mmap_prepare)
2736:			error = call_mmap_prepare(&map, &desc);
2737:		if (error)
2738:			goto abort_munmap;
2739:	
2740:		if (map.check_ksm_early)
2741:			update_ksm_flags(&map);
2742:	
2743:		/* Attempt to merge with adjacent VMAs... */
2744:		if (map.prev || map.next) {
2745:			VMG_MMAP_STATE(vmg, &map, /* vma = */ NULL);
2746:	
2747:			vma = vma_merge_new_range(&vmg);
2748:		}
2749:	
2750:		/* ...but if we can't, allocate a new VMA. */
2751:		if (!vma) {
2752:			error = __mmap_new_vma(&map, &vma);
2753:			if (error)
2754:				goto unacct_error;
2755:			allocated_new = true;
2756:		}
2757:	
2758:		if (have_mmap_prepare)
2759:			set_vma_user_defined_fields(vma, &map);
2760:	
2761:		__mmap_complete(&map, vma);
2762:	
2763:		if (have_mmap_prepare && allocated_new) {
2764:			error = call_action_complete(&map, &desc, vma);
2765:	
2766:			if (error)
2767:				return error;
2768:		}
2769:	
2770:		return addr;
2771:	
2772:		/* Accounting was done by __mmap_setup(). */
2773:	unacct_error:
2774:		if (map.charged)
2775:			vm_unacct_memory(map.charged);
2776:	abort_munmap:
2777:		vms_abort_munmap_vmas(&map.vms, &map.mas_detach);
2778:		return error;
2779:	}
 SourceFile:mm/vma.c]

41/2 2026/03/11 11:15 llm crash-repro-finder
0m Model:
gemini-3-pro-preview

Tokens:
							input: 86328
							output: 43
							thoughts: 62
						

Thoughts:
Okay, let me think this through.

**Analyzing the Memory Mapping Process**

Alright, so I'm digging into this memory mapping code and I've got a couple of key function calls highlighted. I see `__mmap_complete` called at line 2761. That's usually the final step, right? Signifying the completion of a mapping. Now, I'm trying to trace back the process, and I see `__mmap_new_vma` called a bit earlier at line 2752. This looks like the place where a *new* virtual memory area, or VMA, is actually established.

My immediate question is, does `__mmap_new_vma` update that critical `mm->map_count` field? That counter is so important. It's the core of tracking how many memory mappings are currently active within a given memory management structure. If `__mmap_new_vma` *doesn't* increment it, then something isn't right. I'll need to jump into that function definition and see what's happening internally to confirm my assumption. Getting this detail right is crucial for understanding how the kernel manages its memory resources.



42/2 2026/03/11 11:15 tool codesearch-definition-source
0m Args:
map[ContextFile:mm/vma.c IncludeLines:true Name:__mmap_new_vma]

Results:
map[SourceCode:2490:	/*
2491:	 * __mmap_new_vma() - Allocate a new VMA for the region, as merging was not
2492:	 * possible.
2493:	 *
2494:	 * @map:  Mapping state.
2495:	 * @vmap: Output pointer for the new VMA.
2496:	 *
2497:	 * Returns: Zero on success, or an error.
2498:	 */
2499:	static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap)
2500:	{
2501:		struct vma_iterator *vmi = map->vmi;
2502:		int error = 0;
2503:		struct vm_area_struct *vma;
2504:	
2505:		/*
2506:		 * Determine the object being mapped and call the appropriate
2507:		 * specific mapper. the address has already been validated, but
2508:		 * not unmapped, but the maps are removed from the list.
2509:		 */
2510:		vma = vm_area_alloc(map->mm);
2511:		if (!vma)
2512:			return -ENOMEM;
2513:	
2514:		vma_iter_config(vmi, map->addr, map->end);
2515:		vma_set_range(vma, map->addr, map->end, map->pgoff);
2516:		vm_flags_init(vma, map->vm_flags);
2517:		vma->vm_page_prot = map->page_prot;
2518:	
2519:		if (vma_iter_prealloc(vmi, vma)) {
2520:			error = -ENOMEM;
2521:			goto free_vma;
2522:		}
2523:	
2524:		if (map->file)
2525:			error = __mmap_new_file_vma(map, vma);
2526:		else if (map->vm_flags & VM_SHARED)
2527:			error = shmem_zero_setup(vma);
2528:		else
2529:			vma_set_anonymous(vma);
2530:	
2531:		if (error)
2532:			goto free_iter_vma;
2533:	
2534:		if (!map->check_ksm_early) {
2535:			update_ksm_flags(map);
2536:			vm_flags_init(vma, map->vm_flags);
2537:		}
2538:	
2539:	#ifdef CONFIG_SPARC64
2540:		/* TODO: Fix SPARC ADI! */
2541:		WARN_ON_ONCE(!arch_validate_flags(map->vm_flags));
2542:	#endif
2543:	
2544:		/* Lock the VMA since it is modified after insertion into VMA tree */
2545:		vma_start_write(vma);
2546:		vma_iter_store_new(vmi, vma);
2547:		map->mm->map_count++;
2548:		vma_link_file(vma, map->hold_file_rmap_lock);
2549:	
2550:		/*
2551:		 * vma_merge_new_range() calls khugepaged_enter_vma() too, the below
2552:		 * call covers the non-merge case.
2553:		 */
2554:		if (!vma_is_anonymous(vma))
2555:			khugepaged_enter_vma(vma, map->vm_flags);
2556:		*vmap = vma;
2557:		return 0;
2558:	
2559:	free_iter_vma:
2560:		vma_iter_free(vmi);
2561:	free_vma:
2562:		vm_area_free(vma);
2563:		return error;
2564:	}
 SourceFile:mm/vma.c]

43/2 2026/03/11 11:15 llm crash-repro-finder
0m Model:
gemini-3-pro-preview

Tokens:
							input: 87863
							output: 917
							thoughts: 561
						

Thoughts:
**Analyzing a Potential Data Race in Memory Management**

Okay, so I've been looking at this code and, at line 2547, `map->mm->map_count++` is where it gets interesting. `mmap` increments `map_count`, and then calls `__mmap_complete`, which in turn calls `vms_complete_munmap_vmas`, which *decrements* `map_count`. Both of these operations are protected by the `mmap_write_lock`, which is good... but then `check_mremap_params` reads `map_count` *without* holding that lock. That’s a data race, plain and simple.

To reproduce this, we just need concurrent calls to `mmap` and `mremap`. The syzkaller reproducer *should* be able to hit this eventually, if it runs long enough. I thought about writing a C program with threads to explicitly loop `mmap` and `mremap`, but then I remembered syzkaller's `async` feature, which would be ideal. My concern with `async`, though, is that syzkaller might not run it enough times to make the data race window big enough. So, how can I stretch the race window?

Well, `mmap` with `MAP_FIXED` over many VMAs would make `vms_complete_munmap_vmas` take longer. So, if I create a bunch of VMAs and then `mmap` over them, that will buy me some time. Thinking practically, the fuzzer is the best place to find this, as it is designed to run the code a large number of times.

Let's try creating a thousand VMAs and then running `mmap` over them.

```
mmap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
mmap(&(0x7f0000001000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
...
```

Ah, but syzkaller programs have size limits. I could just use a loop in C, but, wait, I'm already working with syzkaller, so that seems counterproductive. I know syzkaller has a `fault_injection` feature, but that’s not really what I need here; I don't want to induce a fault, I want a delay.

So, let's just write a simple syzkaller program and let the fuzzer do the heavy lifting! Maybe the `reproduce-crash` tool hasn't triggered it, but maybe I'll get better results by writing a syzkaller program that creates a few VMAs and then `mmap`s over them, focusing on increasing that window for a race condition.



44/2 2026/03/11 11:15 tool reproduce-crash
0m Args:
map[ReproSyz:mmap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
mmap(&(0x7f0000002000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
mmap(&(0x7f0000004000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
mmap(&(0x7f0000006000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
mmap(&(0x7f0000008000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
mmap(&(0x7f000000a000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
mmap(&(0x7f000000c000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
mmap(&(0x7f000000e000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
mmap(&(0x7f0000000000/0x10000)=nil, 0x10000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000010000/0x1000)=nil) (async)
mmap(&(0x7f0000000000/0x10000)=nil, 0x10000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000010000/0x1000)=nil) (async)
mmap(&(0x7f0000000000/0x10000)=nil, 0x10000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000010000/0x1000)=nil) (async)
]

Results:
map[ReproducedBugTitle: ReproducedCrashReport:]

45/2 2026/03/11 11:15 llm crash-repro-finder
0m Model:
gemini-3-pro-preview

Tokens:
							input: 90034
							output: 917
							thoughts: 66
						

Thoughts:
Okay, let's see, no crash yet. Hmm, perhaps I'm approaching this wrong. I'm going to try a different tack. Maybe using `mremap` with the `MREMAP_DONTUNMAP` flag will offer a solution. Let me check... Ah yes, `MREMAP_DONTUNMAP` is represented by the value `4`. And `MREMAP_MAYMOVE` is `1`. Therefore, if I combine them, the `flags` parameter I need should be `5`. Right, let's give that a shot and see if this new strategy brings the desired result.



46/2 2026/03/11 11:16 tool reproduce-crash
0m Args:
map[ReproSyz:mmap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
mmap(&(0x7f0000002000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
mmap(&(0x7f0000004000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
mmap(&(0x7f0000006000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
mmap(&(0x7f0000008000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
mmap(&(0x7f000000a000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
mmap(&(0x7f000000c000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
mmap(&(0x7f000000e000/0x1000)=nil, 0x1000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
mmap(&(0x7f0000000000/0x10000)=nil, 0x10000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x5, &(0x7f0000010000/0x1000)=nil) (async)
mmap(&(0x7f0000000000/0x10000)=nil, 0x10000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x5, &(0x7f0000010000/0x1000)=nil) (async)
mmap(&(0x7f0000000000/0x10000)=nil, 0x10000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x5, &(0x7f0000010000/0x1000)=nil) (async)
]

Results:
map[ReproducedBugTitle: ReproducedCrashReport:]

47/2 2026/03/11 11:16 llm crash-repro-finder
0m Model:
gemini-3-pro-preview

Tokens:
							input: 91174
							output: 44
							thoughts: 632
						

Thoughts:
**Debugging a Potential mremap Issue**

Okay, still no crash, which is good, but I need to understand this `mremap` behavior more precisely. Let's see... the `MREMAP_DONTUNMAP` flag, that seems to be the key to this function. It specifically requires `old_len == new_len`, that I know. And `MREMAP_MAYMOVE` is also mandatory. But I need to revisit the other constraints. Is the VMA required to be anonymous and private? No, that's wrong; that must not be right. It only mandates `old_len` and `new_len` to be equal.

Okay, does `MREMAP_DONTUNMAP` force the user to set a `new_addr`? No, it implies that the kernel will assign `new_addr` if I *don't* set `MREMAP_FIXED`, right? Wait a second, if `MREMAP_FIXED` *isn't* set, is `new_addr` then ignored? That's what I am thinking. However, there's a check in `check_mremap_params`:

```c
	/* Remainder of checks are for cases with specific new_addr. */
	if (!vrm_implies_new_addr(vrm))
		return 0;
```

This `vrm_implies_new_addr` function is the key function. This checks if `vrm->flags & (MREMAP_FIXED | MREMAP_DONTUNMAP)` is true. So, if `MREMAP_DONTUNMAP` is indeed set, that check passes. It then does another check:

```c
	/* Is the new address silly? */
	if (vrm->new_addr > TASK_SIZE - vrm->new_len)
		return -EINVAL;
```

So, it *does* check `new_addr`. If `MREMAP_FIXED` is *not* set, then `vrm->new_addr` is whatever the user passed, regardless of what I am thinking.

Now, let me go back to the `sys_mremap` function itself in `mm/mremap.c`. It takes these parameters:

```c
SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
		unsigned long, new_len, unsigned long, flags,
		unsigned long, new_addr)
```

And `do_mremap` initializes the `vrm` struct:

```c
	struct vma_remap_struct vrm = {
		.addr = addr,
		.old_len = old_len,
		.new_len = new_len,
		.flags = flags,
		.new_addr = new_addr,
	};
```

So, if `flags` is `5` which would correspond to `MREMAP_DONTUNMAP | MREMAP_MAYMOVE`, then `vrm_implies_new_addr` *is* true, and it does go on to check `new_addr`.

So then I had it wrong... Wait, if `MREMAP_FIXED` is *not* set, the `new_addr` is *not* ignored. It looks like `MREMAP_DONTUNMAP` *requires* a valid `new_addr`. Right? I need to confirm this. Time to check the `mremap` man page or dive into the source code to get it all straight.



48/2 2026/03/11 11:16 tool codesearch-definition-source
0m Args:
map[ContextFile:mm/mremap.c IncludeLines:true Name:__do_sys_mremap]

Results:
map[SourceCode:1958:	/*
1959:	 * Expand (or shrink) an existing mapping, potentially moving it at the
1960:	 * same time (controlled by the MREMAP_MAYMOVE flag and available VM space)
1961:	 *
1962:	 * MREMAP_FIXED option added 5-Dec-1999 by Benjamin LaHaise
1963:	 * This option implies MREMAP_MAYMOVE.
1964:	 */
1965:	SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
1966:			unsigned long, new_len, unsigned long, flags,
1967:			unsigned long, new_addr)
1968:	{
1969:		struct vm_userfaultfd_ctx uf = NULL_VM_UFFD_CTX;
1970:		LIST_HEAD(uf_unmap_early);
1971:		LIST_HEAD(uf_unmap);
1972:		/*
1973:		 * There is a deliberate asymmetry here: we strip the pointer tag
1974:		 * from the old address but leave the new address alone. This is
1975:		 * for consistency with mmap(), where we prevent the creation of
1976:		 * aliasing mappings in userspace by leaving the tag bits of the
1977:		 * mapping address intact. A non-zero tag will cause the subsequent
1978:		 * range checks to reject the address as invalid.
1979:		 *
1980:		 * See Documentation/arch/arm64/tagged-address-abi.rst for more
1981:		 * information.
1982:		 */
1983:		struct vma_remap_struct vrm = {
1984:			.addr = untagged_addr(addr),
1985:			.old_len = old_len,
1986:			.new_len = new_len,
1987:			.flags = flags,
1988:			.new_addr = new_addr,
1989:	
1990:			.uf = &uf,
1991:			.uf_unmap_early = &uf_unmap_early,
1992:			.uf_unmap = &uf_unmap,
1993:	
1994:			.remap_type = MREMAP_INVALID, /* We set later. */
1995:		};
1996:	
1997:		return do_mremap(&vrm);
1998:	}
 SourceFile:mm/mremap.c]

49/2 2026/03/11 11:16 llm crash-repro-finder
0m Model:
gemini-3-pro-preview

Tokens:
							input: 93326
							output: 1155
							thoughts: 802
						

Thoughts:
**Analyzing the `mremap` Data Race and Crafting a Reproducer**

Okay, let's break this down. The `check_mremap_params` function is the key. It looks like with `MREMAP_MAYMOVE | MREMAP_FIXED`, the code eventually checks `current->mm->map_count`. That's interesting, and it suggests a potential race condition. The reported bug says KCSAN caught it, so it *should* be detectable. The fact that the original reproducer didn't trigger it has me wondering about the size of the race window.

My first thought is to try and widen that window. The idea is to unmap a bunch of VMAs to increase contention. I quickly sketched out a sequence of `mmap` calls to do that. Initially, I thought about unmapping 8 VMAs, then 100... but syzkaller's program length restrictions are a constraint. Using a C loop inside `syz_execute_func` won't work either because it's disabled.

Then it hits me: this *should* be a pretty straightforward data race. Why not just try the simplest thing first? I don't have the original reproducer, but let's try to replicate the same approach with multiple threads and concurrent `mmap` and `mremap` calls.

My initial thought was to use a basic program with one `mmap` and one `mremap` call. If I repeat this call 10 times, it would create 20 threads. However, syzkaller limits thread counts to 16. So I can do 8 pairs of those calls instead.



50/2 2026/03/11 11:16 tool reproduce-crash
0m Args:
map[ReproSyz:mmap(&(0x7f0000000000/0x10000)=nil, 0x10000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000010000/0x1000)=nil) (async)
mmap(&(0x7f0000000000/0x10000)=nil, 0x10000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000010000/0x1000)=nil) (async)
mmap(&(0x7f0000000000/0x10000)=nil, 0x10000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000010000/0x1000)=nil) (async)
mmap(&(0x7f0000000000/0x10000)=nil, 0x10000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000010000/0x1000)=nil) (async)
mmap(&(0x7f0000000000/0x10000)=nil, 0x10000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000010000/0x1000)=nil) (async)
mmap(&(0x7f0000000000/0x10000)=nil, 0x10000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000010000/0x1000)=nil) (async)
mmap(&(0x7f0000000000/0x10000)=nil, 0x10000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000010000/0x1000)=nil) (async)
mmap(&(0x7f0000000000/0x10000)=nil, 0x10000, 0x3, 0x32, 0xffffffffffffffff, 0x0) (async)
mremap(&(0x7f0000000000/0x1000)=nil, 0x1000, 0x1000, 0x3, &(0x7f0000010000/0x1000)=nil) (async)
]

Results:
map[ReproducedBugTitle: ReproducedCrashReport:]

51/2 2026/03/11 11:16 llm crash-repro-finder
0m Model:
gemini-3-pro-preview

Error:
 (RECITATION)

LLM Calls Summary:
Total Calls Total Tokens Avg Tokens Total Duration (Seconds) Avg Duration (Seconds)
Tool Calls Summary:
Total Calls Total Duration (Seconds) Avg Duration (Seconds)