JitArm64: Hardcode fewer registers in EmitBackpatchRoutine #13256

Open
wants to merge 8 commits into master
Source/Core/Core/PowerPC/JitArm64/Jit.h: 52 changes (30 additions, 22 deletions)
@@ -250,34 +250,42 @@ class JitArm64 : public JitBase, public Arm64Gen::ARM64CodeBlock, public CommonA
  // This is the core routine for accessing emulated memory, with support for
  // many different kinds of loads and stores as well as fastmem/backpatching.
  //
- // Registers used:
+ // The addr parameter can be any register, but the code emitted for slow accesses
+ // will be slightly more efficient if the addr parameter is as follows:
  //
- //               addr     scratch
- // Store:         X2       X1
- // Load:          X1
- // Zero 256:      X1       X30
- // Store float:   X2       Q0
- // Load float:    X1
+ // Store:       W2
+ // Load:        W1
+ // Zero 256:    W1
+ // Store float: W2
+ // Load float:  W1
  //
- // If mode == AlwaysFastAccess, the addr argument can be any register.
- // Otherwise it must be the register listed in the table above.
+ // This routine allocates most scratch registers dynamically, but in the following
+ // situations, specific scratch registers have to be allocated in advance:
  //
- // Additional scratch registers are used in the following situations:
+ // emitting_routine && mode == Auto:                                           X0
+ // emitting_routine && mode == Auto && (flags & BackPatchInfo::FLAG_STORE):    X1
+ // emitting_routine && mode == Auto && !(flags & BackPatchInfo::FLAG_STORE):   X3
+ // emitting_routine && mode != AlwaysSlowAccess && !jo.fastmem:                X3
+ // emitting_routine && mode != AlwaysSlowAccess && !jo.fastmem:                X0
+ // emitting_routine && mode != AlwaysSlowAccess &&
+ //     (flags & BackPatchInfo::FLAG_STORE) && !(flags & BackPatchInfo::FLAG_FLOAT): X1
+ // emitting_routine && mode != AlwaysSlowAccess &&
+ //     (flags & BackPatchInfo::FLAG_STORE) && (flags & BackPatchInfo::FLAG_FLOAT):  Q0
+ // emitting_routine && mode != AlwaysSlowAccess &&
+ //     (flags & BackPatchInfo::FLAG_ZERO_256):                                 X30
+ // !emitting_routine && mode == Auto && jo.fastmem:                            X30
  //
- // emitting_routine && mode == Auto:                                           X0
- // emitting_routine && mode == Auto && !(flags & BackPatchInfo::FLAG_STORE):   X3
- // emitting_routine && mode != AlwaysSlowAccess && !jo.fastmem:                X3
- // mode != AlwaysSlowAccess && !jo.fastmem:                                    X0
- // !emitting_routine && mode != AlwaysFastAccess && jo.memcheck &&
- //                      (flags & BackPatchInfo::FLAG_LOAD):                    X0
- // !emitting_routine && mode != AlwaysSlowAccess && !jo.fastmem:               X30
- // !emitting_routine && mode == Auto && jo.fastmem:                            X30
+ // If there are any other registers that the caller doesn't mind being overwritten,
+ // these can be indicated in scratch_gprs and scratch_fprs.
+ //
+ // In the following situations, certain host registers must not contain guest registers:
  //
- // Furthermore, any callee-saved register which isn't marked in gprs_to_push/fprs_to_push
- // may be clobbered if mode != AlwaysFastAccess.
+ // !emitting_routine && mode != AlwaysFastAccess && jo.memcheck:               X30
+ // !emitting_routine && mode != AlwaysFastAccess && jo.memcheck &&
+ //                      (flags & BackPatchInfo::FLAG_LOAD):                    X0
  void EmitBackpatchRoutine(u32 flags, MemAccessMode mode, Arm64Gen::ARM64Reg RS,
-                           Arm64Gen::ARM64Reg addr, BitSet32 gprs_to_push = BitSet32(0),
-                           BitSet32 fprs_to_push = BitSet32(0), bool emitting_routine = false);
+                           Arm64Gen::ARM64Reg addr, BitSet32 scratch_gprs = BitSet32(0),
+                           BitSet32 scratch_fprs = BitSet32(0), bool emitting_routine = false);

  // Loadstore routines
  void SafeLoadToReg(u32 dest, s32 addr, s32 offsetReg, u32 flags, s32 offset, bool update);
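
To make the renamed parameters concrete, here is a minimal sketch of what a call site might look like after this change. It is illustrative only: the MemAccessMode::Auto spelling is inferred from the "mode == Auto" wording in the comment, flag constants other than FLAG_LOAD are omitted, and the register choices are examples rather than code taken from the PR.

```cpp
// Sketch of a hypothetical call site inside a JitArm64 member function
// (e.g. somewhere in SafeLoadToReg); not part of this PR.
const u32 flags = BackPatchInfo::FLAG_LOAD;  // size/extend flags would be OR'd in here

// W1 matches the "Load: W1" row above, so the slow-access path is slightly more
// efficient, but with this change any register is accepted for addr.
EmitBackpatchRoutine(flags, MemAccessMode::Auto,
                     Arm64Gen::ARM64Reg::W0,  // RS: destination of the loaded value
                     Arm64Gen::ARM64Reg::W1,  // addr: guest effective address
                     BitSet32(0),             // scratch_gprs: set bits for any GPRs the
                                              //   caller doesn't mind being overwritten
                     BitSet32(0),             // scratch_fprs: likewise for FPRs
                     false);                  // emitting_routine: emitting regular block code
```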