Make Type.intAbiAlignment match LLVM alignment for x86-windows target
During the LLVM 18 upgrade, two changes together caused `@alignOf(u64)` to become 4 for the x86-windows target:
- `Type.maxIntAlignment` was changed to return 16 for x86 (200e06b). Before that commit, `maxIntAlignment` was 8 for windows/uefi and 4 for everything else.
- `Type.intAbiAlignment` was changed to return 4 for bit sizes 33...64 (7e1cba7 + e89d6fc). Before those commits, `intAbiAlignment` would return 8 on x86-windows, since its `maxIntAlignment` was 8 (on other targets, the `maxIntAlignment` of 4 clamped `intAbiAlignment` down to 4). A sketch of the user-visible effect follows.
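To make the regression concrete, here is a minimal sketch of the user-visible effect; the test is illustrative and not part of the commit. The Microsoft x86 ABI aligns 64-bit integers to 8 bytes, so this check fails on an affected compiler:

```zig
const std = @import("std");
const builtin = @import("builtin");

test "u64 ABI alignment on x86-windows" {
    // Only meaningful when actually compiling for the affected target.
    if (builtin.cpu.arch == .x86 and builtin.os.tag == .windows) {
        // The x86-windows ABI aligns u64 to 8 bytes; the regressed
        // compiler reported 4 here.
        try std.testing.expect(@alignOf(u64) == 8);
    }
}
```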
`src/codegen/llvm.zig` has its own alignment calculations, which no longer match the values returned by the `Type` functions. For the x86-windows target, this loop:
ddcb7b1c11/src/codegen/llvm.zig (L558-L567)
will, when `size` is 64, set `abi` and `pref` to 64 (an alignment of 8 bytes), which doesn't match the `Type` alignment of 4.
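As a hedged illustration (this is not the actual `codegen/llvm.zig` code), the sketch below shows how a loop over the integer entries of an LLVM data layout arrives at `abi` and `pref` of 64 bits for a 64-bit integer: the i386-windows layout string contains an `i64:64` entry. The `IntEntry` type, `intAlignmentBits` helper, and `entries` table are hypothetical stand-ins for the parsed data layout:

```zig
const std = @import("std");

const IntEntry = struct { size: u16, abi: u16, pref: u16 };

// Walk integer entries in ascending size order, keeping the alignment of
// the last entry visited; stop at the first entry that can hold `size` bits.
fn intAlignmentBits(size: u16, entries: []const IntEntry) struct { abi: u16, pref: u16 } {
    var abi: u16 = entries[0].abi;
    var pref: u16 = entries[0].pref;
    for (entries) |entry| {
        abi = entry.abi;
        pref = entry.pref;
        if (entry.size >= size) break;
    }
    return .{ .abi = abi, .pref = pref };
}

test "an i64:64 entry yields abi = pref = 64 (8 bytes)" {
    // Hypothetical integer entries for the i386-windows data layout.
    const x86_windows = [_]IntEntry{
        .{ .size = 8, .abi = 8, .pref = 8 },
        .{ .size = 16, .abi = 16, .pref = 16 },
        .{ .size = 32, .abi = 32, .pref = 32 },
        .{ .size = 64, .abi = 64, .pref = 64 },
    };
    const result = intAlignmentBits(64, &x86_windows);
    try std.testing.expect(result.abi == 64 and result.pref == 64);
}
```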
This commit makes `Type.intAbiAlignment` match the alignment calculated in `codegen/llvm.zig`.
Fixes #20047
Fixes #20466
Fixes #20469
parent 00097c3bb8
commit 304519da27
```diff
@@ -1563,7 +1563,11 @@ pub fn intAbiAlignment(bits: u16, target: Target, use_llvm: bool) Alignment {
             0 => .none,
             1...8 => .@"1",
             9...16 => .@"2",
-            17...64 => .@"4",
+            17...32 => .@"4",
+            33...64 => switch (target.os.tag) {
+                .uefi, .windows => .@"8",
+                else => .@"4",
+            },
             else => .@"16",
         },
         .x86_64 => switch (bits) {
```
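For reference, here is a hedged sketch of the post-fix selection logic as a standalone, testable function. `intAbiAlignmentBytes` is a hypothetical helper for illustration only; the real `intAbiAlignment` takes a full `Target` plus a `use_llvm` flag and returns an `Alignment` rather than a byte count:

```zig
const std = @import("std");

// Hypothetical stand-in for the fixed switch above, returning the
// alignment in bytes instead of an Alignment value.
fn intAbiAlignmentBytes(bits: u16, os_tag: std.Target.Os.Tag) u8 {
    return switch (bits) {
        0 => 0, // zero-bit integers have no alignment (.none in the real code)
        1...8 => 1,
        9...16 => 2,
        17...32 => 4,
        33...64 => switch (os_tag) {
            // Match LLVM's i64:64 data-layout entry for x86-windows/uefi.
            .uefi, .windows => 8,
            else => 4,
        },
        else => 16,
    };
}

test "64-bit integer alignment on x86 after the fix" {
    try std.testing.expect(intAbiAlignmentBytes(64, .windows) == 8);
    try std.testing.expect(intAbiAlignmentBytes(64, .linux) == 4);
}
```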