Commit ebf7d1f

mfijalko (Maciej Fijalkowski) authored and Alexei Starovoitov committed
bpf, x64: rework pro/epilogue and tailcall handling in JIT
This commit serves two purposes: 1) it optimizes BPF prologue/epilogue generation, and 2) it makes it possible to have tail calls within a BPF subprogram. The two points are related, since without 1), 2) could not be achieved.

In [1], Alexei says:

"The prologue will look like:
nop5
xor eax,eax  // two new bytes if bpf_tail_call() is used in this
             // function
push rbp
mov rbp, rsp
sub rsp, rounded_stack_depth
push rax // zero init tail_call counter
variable number of push rbx,r13,r14,r15

Then bpf_tail_call will pop variable number rbx,.. and final 'pop rax'
Then 'add rsp, size_of_current_stack_frame'
jmp to next function and skip over 'nop5; xor eax,eax; push rbp; mov rbp, rsp'

This way new function will set its own stack size and will init tail call counter with whatever value the parent had.

If next function doesn't use bpf_tail_call it won't have 'xor eax,eax'. Instead it would need to have 'nop2' in there."

Implement that suggestion.

Since the stack layout has changed, tail call counter handling can no longer rely on popping it into rbx, as was done for the constant-prologue case where rbx was later overwritten with the actual value of rbx pushed to the stack. Therefore, use one of the volatile/caller-saved registers (%rcx) and pop the tail call counter into it in the epilogue.

Drop the BUILD_BUG_ON in emit_prologue and in emit_bpf_tail_call_indirect, where the instruction layout is no longer constant.

Introduce a new poke target, 'tailcall_bypass', in the poke descriptor. It is dedicated to skipping the register pops and stack unwind that are generated right before the actual jump to the target program. When the target program is not present, the BPF program will skip the pop instructions and the nop5 dedicated to the jmpq $target. An example of such a state, when only R6 of the callee-saved registers is used by the program:

ffffffffc0513aa1: e9 0e 00 00 00        jmpq   0xffffffffc0513ab4
ffffffffc0513aa6: 5b                    pop    %rbx
ffffffffc0513aa7: 58                    pop    %rax
ffffffffc0513aa8: 48 81 c4 00 00 00 00  add    $0x0,%rsp
ffffffffc0513aaf: 0f 1f 44 00 00        nopl   0x0(%rax,%rax,1)
ffffffffc0513ab4: 48 89 df              mov    %rbx,%rdi

When the target program is inserted, the jump that was there to skip the pops/nop5 becomes the nop5, so the CPU will go through the pops and do the actual tail call.

One might ask why there simply can not be pushes after the nop5. Consider the following snippet:

ffffffffc037030c: 48 89 fb              mov    %rdi,%rbx
(...)
ffffffffc0370332: 5b                    pop    %rbx
ffffffffc0370333: 58                    pop    %rax
ffffffffc0370334: 48 81 c4 00 00 00 00  add    $0x0,%rsp
ffffffffc037033b: 0f 1f 44 00 00        nopl   0x0(%rax,%rax,1)
ffffffffc0370340: 48 81 ec 00 00 00 00  sub    $0x0,%rsp
ffffffffc0370347: 50                    push   %rax
ffffffffc0370348: 53                    push   %rbx
ffffffffc0370349: 48 89 df              mov    %rbx,%rdi
ffffffffc037034c: e8 f7 21 00 00        callq  0xffffffffc0372548

There is a bpf2bpf call (at ffffffffc037034c) right after the tail call, and the jump target is not present. ctx is in the %rbx register, and the BPF subprogram that is called at ffffffffc037034c relies on it, e.g. it will pick up ctx from there. Such a code layout is therefore broken, as we would overwrite the content of %rbx with the value that was pushed in the prologue. That is the reason for the 'bypass' approach.
Special care needs to be taken during install/update/remove of the tail call target: when the target program is not present, the CPU must not execute the pop instructions that precede the tail call.

To address that, the following states can be defined:
A: nop,  unwind, nop
B: nop,  unwind, tail
C: skip, unwind, nop
D: skip, unwind, tail

A is forbidden (it leads to incorrectness). The state transitions between tail call install/update/remove work as follows:

First install of tail call f: C->D->B(f)
 * poke the tailcall, after that get rid of the skip
Update tail call f to f': B(f)->B(f')
 * poke the tailcall (poke->tailcall_target) and do NOT touch poke->tailcall_bypass
Remove tail call: B(f')->C(f')
 * poke->tailcall_bypass is poked back to a jump, then we wait for an RCU grace period so that other programs finish their execution, and after that we are safe to remove poke->tailcall_target
Install new tail call (f''): C(f')->D(f'')->B(f'')
 * same as the first step

This way the CPU can never be exposed to the "unwind, tail" state.

Last but not least, when tail calls get mixed with bpf2bpf calls, it would be possible to run into an endless loop due to clearing of the tail call counter, for example with a subprogram-based variant of the tailcall3 test from the BPF selftests, where the tail call lives inside a BPF subprogram. Broken down into steps, such a test would do:

entry -> set tailcall counter to 0, bump it by 1, tailcall to func0
func0 -> call subprog_tail
(we are NOT skipping the first 11 bytes of the prologue, and this subprogram has a tail call, therefore we clear the counter...)
subprog -> do the same thing as entry

and then loop forever.

To address this, the idea is to go through the call chain of bpf2bpf progs and look for a tail call anywhere in the chain. If a single tail call is seen, then each node in the call chain is marked as a subprog that can reach a tail call. This info is later fed to the JIT, which will:
- set eax to 0 only when a tail call is reachable and this is the entry prog
- if a tail call is reachable but there is no tail call in the insns of the currently JITed prog, push rax anyway, so that the counter can be propagated further down the call chain
- finally, if a tail call is reachable, precede the 'call' insn with mov rax, [rbp - (stack_depth + 8)]

Tail call related cases from the test_verifier kselftest keep working, and sample BPF programs that utilize tail calls (sockex3, tracex5) work properly as well.

[1]: https://lore.kernel.org/bpf/[email protected]/

Suggested-by: Alexei Starovoitov <[email protected]>
Signed-off-by: Maciej Fijalkowski <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
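To make the counter propagation concrete, the following is a rough sketch (editorial, not taken from the commit; the stack sizes are invented for illustration) of what the JIT now emits for a tail-call-reachable entry program that does a bpf2bpf call into a tail-call-reachable subprogram:

    /* entry program: tail call reachable, func_idx == 0 */
    nop5
    xor  eax, eax               /* zero-init tail_call_cnt               */
    push rbp
    mov  rbp, rsp
    sub  rsp, 0x10              /* entry's rounded stack depth (made up) */
    push rax                    /* counter slot at [rbp - 0x18]          */
    ...
    mov  rax, [rbp - 0x18]      /* [rbp - (stack_depth + 8)]             */
    callq subprog               /* bpf2bpf call                          */

    /* subprogram: tail call reachable, func_idx != 0 */
    nop5
    nop2                        /* do not re-init the counter            */
    push rbp
    mov  rbp, rsp
    sub  rsp, 0x20              /* subprog's own stack size (made up)    */
    push rax                    /* inherit parent's tail_call_cnt        */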
1 parent 7f6e431 commit ebf7d1f

File tree: 6 files changed, +244 −55 lines changed

6 files changed

+244
-55
lines changed

arch/x86/net/bpf_jit_comp.c

Lines changed: 189 additions & 48 deletions
@@ -221,14 +221,48 @@ struct jit_context {
 
 /* Number of bytes emit_patch() needs to generate instructions */
 #define X86_PATCH_SIZE		5
+/* Number of bytes that will be skipped on tailcall */
+#define X86_TAIL_CALL_OFFSET	11
 
-#define PROLOGUE_SIZE		25
+static void push_callee_regs(u8 **pprog, bool *callee_regs_used)
+{
+	u8 *prog = *pprog;
+	int cnt = 0;
+
+	if (callee_regs_used[0])
+		EMIT1(0x53);         /* push rbx */
+	if (callee_regs_used[1])
+		EMIT2(0x41, 0x55);   /* push r13 */
+	if (callee_regs_used[2])
+		EMIT2(0x41, 0x56);   /* push r14 */
+	if (callee_regs_used[3])
+		EMIT2(0x41, 0x57);   /* push r15 */
+	*pprog = prog;
+}
+
+static void pop_callee_regs(u8 **pprog, bool *callee_regs_used)
+{
+	u8 *prog = *pprog;
+	int cnt = 0;
+
+	if (callee_regs_used[3])
+		EMIT2(0x41, 0x5F);   /* pop r15 */
+	if (callee_regs_used[2])
+		EMIT2(0x41, 0x5E);   /* pop r14 */
+	if (callee_regs_used[1])
+		EMIT2(0x41, 0x5D);   /* pop r13 */
+	if (callee_regs_used[0])
+		EMIT1(0x5B);         /* pop rbx */
+	*pprog = prog;
+}
 
 /*
- * Emit x86-64 prologue code for BPF program and check its size.
- * bpf_tail_call helper will skip it while jumping into another program
+ * Emit x86-64 prologue code for BPF program.
+ * bpf_tail_call helper will skip the first X86_TAIL_CALL_OFFSET bytes
+ * while jumping to another program
  */
-static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf)
+static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf,
+			  bool tail_call_reachable, bool is_subprog)
 {
 	u8 *prog = *pprog;
 	int cnt = X86_PATCH_SIZE;
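The value 11 is simply the number of bytes in the prologue portion that a tail call jumps over; the breakdown below is an editorial note derived from the EMIT sizes used in emit_prologue():

	nop5                    /* 5 bytes (ideal_nops[NOP_ATOMIC5])         */
	xor eax, eax / nop2     /* 2 bytes (EMIT2)                           */
	push rbp                /* 1 byte  (EMIT1(0x55))                     */
	mov rbp, rsp            /* 3 bytes (EMIT3(0x48, 0x89, 0xE5))         */
	                        /* 5 + 2 + 1 + 3 = 11 = X86_TAIL_CALL_OFFSET */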
@@ -238,19 +272,18 @@ static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf)
 	 */
 	memcpy(prog, ideal_nops[NOP_ATOMIC5], cnt);
 	prog += cnt;
+	if (!ebpf_from_cbpf) {
+		if (tail_call_reachable && !is_subprog)
+			EMIT2(0x31, 0xC0); /* xor eax, eax */
+		else
+			EMIT2(0x66, 0x90); /* nop2 */
+	}
 	EMIT1(0x55);             /* push rbp */
 	EMIT3(0x48, 0x89, 0xE5); /* mov rbp, rsp */
 	/* sub rsp, rounded_stack_depth */
 	EMIT3_off32(0x48, 0x81, 0xEC, round_up(stack_depth, 8));
-	EMIT1(0x53);             /* push rbx */
-	EMIT2(0x41, 0x55);       /* push r13 */
-	EMIT2(0x41, 0x56);       /* push r14 */
-	EMIT2(0x41, 0x57);       /* push r15 */
-	if (!ebpf_from_cbpf) {
-		/* zero init tail_call_cnt */
-		EMIT2(0x6a, 0x00);
-		BUILD_BUG_ON(cnt != PROLOGUE_SIZE);
-	}
+	if (tail_call_reachable)
+		EMIT1(0x50);         /* push rax */
 	*pprog = prog;
 }
 
@@ -314,13 +347,14 @@ static int __bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
 	mutex_lock(&text_mutex);
 	if (memcmp(ip, old_insn, X86_PATCH_SIZE))
 		goto out;
+	ret = 1;
 	if (memcmp(ip, new_insn, X86_PATCH_SIZE)) {
 		if (text_live)
 			text_poke_bp(ip, new_insn, X86_PATCH_SIZE, NULL);
 		else
 			memcpy(ip, new_insn, X86_PATCH_SIZE);
+		ret = 0;
 	}
-	ret = 0;
 out:
 	mutex_unlock(&text_mutex);
 	return ret;
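A brief note on the return-value change above (an editorial reading of the diff, not wording from the commit): __bpf_arch_text_poke() can now tell its caller whether it actually had to write anything:

	/*
	 * ret < 0 : ip does not contain old_insn (pre-existing error path)
	 * ret == 1: ip already contains new_insn, nothing was poked
	 * ret == 0: new_insn was actually written
	 */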
@@ -337,6 +371,22 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
 	return __bpf_arch_text_poke(ip, t, old_addr, new_addr, true);
 }
 
+static int get_pop_bytes(bool *callee_regs_used)
+{
+	int bytes = 0;
+
+	if (callee_regs_used[3])
+		bytes += 2;
+	if (callee_regs_used[2])
+		bytes += 2;
+	if (callee_regs_used[1])
+		bytes += 2;
+	if (callee_regs_used[0])
+		bytes += 1;
+
+	return bytes;
+}
+
 /*
  * Generate the following code:
  *
@@ -351,12 +401,26 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
  *   goto *(prog->bpf_func + prologue_size);
  * out:
  */
-static void emit_bpf_tail_call_indirect(u8 **pprog)
+static void emit_bpf_tail_call_indirect(u8 **pprog, bool *callee_regs_used,
+					u32 stack_depth)
 {
+	int tcc_off = -4 - round_up(stack_depth, 8);
 	u8 *prog = *pprog;
-	int label1, label2, label3;
+	int pop_bytes = 0;
+	int off1 = 49;
+	int off2 = 38;
+	int off3 = 16;
 	int cnt = 0;
 
+	/* count the additional bytes used for popping callee regs from stack
+	 * that need to be taken into account for each of the offsets that
+	 * are used for bailing out of the tail call
+	 */
+	pop_bytes = get_pop_bytes(callee_regs_used);
+	off1 += pop_bytes;
+	off2 += pop_bytes;
+	off3 += pop_bytes;
+
 	/*
 	 * rdi - pointer to ctx
 	 * rsi - pointer to bpf_array
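A worked example of the offset adjustment above (editorial; it assumes a program that uses only BPF_REG_6, i.e. only rbx is pushed):

	pop_bytes = get_pop_bytes(callee_regs_used);  /* = 1 (pop rbx only) */
	off1 = 49 + 1;   /* OFFSET1 = 50 + RETPOLINE_RCX_BPF_JIT_SIZE */
	off2 = 38 + 1;   /* OFFSET2 = 39 + RETPOLINE_RCX_BPF_JIT_SIZE */
	off3 = 16 + 1;   /* OFFSET3 = 17 + RETPOLINE_RCX_BPF_JIT_SIZE */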
@@ -370,21 +434,19 @@ static void emit_bpf_tail_call_indirect(u8 **pprog)
 	EMIT2(0x89, 0xD2);                        /* mov edx, edx */
 	EMIT3(0x39, 0x56,                         /* cmp dword ptr [rsi + 16], edx */
 	      offsetof(struct bpf_array, map.max_entries));
-#define OFFSET1 (41 + RETPOLINE_RCX_BPF_JIT_SIZE) /* Number of bytes to jump */
+#define OFFSET1 (off1 + RETPOLINE_RCX_BPF_JIT_SIZE) /* Number of bytes to jump */
 	EMIT2(X86_JBE, OFFSET1);                  /* jbe out */
-	label1 = cnt;
 
 	/*
 	 * if (tail_call_cnt > MAX_TAIL_CALL_CNT)
 	 *	goto out;
 	 */
-	EMIT2_off32(0x8B, 0x85, -36 - MAX_BPF_STACK); /* mov eax, dword ptr [rbp - 548] */
+	EMIT2_off32(0x8B, 0x85, tcc_off);         /* mov eax, dword ptr [rbp - tcc_off] */
 	EMIT3(0x83, 0xF8, MAX_TAIL_CALL_CNT);     /* cmp eax, MAX_TAIL_CALL_CNT */
-#define OFFSET2 (30 + RETPOLINE_RCX_BPF_JIT_SIZE)
+#define OFFSET2 (off2 + RETPOLINE_RCX_BPF_JIT_SIZE)
 	EMIT2(X86_JA, OFFSET2);                   /* ja out */
-	label2 = cnt;
 	EMIT3(0x83, 0xC0, 0x01);                  /* add eax, 1 */
-	EMIT2_off32(0x89, 0x85, -36 - MAX_BPF_STACK); /* mov dword ptr [rbp -548], eax */
+	EMIT2_off32(0x89, 0x85, tcc_off);         /* mov dword ptr [rbp - tcc_off], eax */
 
 	/* prog = array->ptrs[index]; */
 	EMIT4_off32(0x48, 0x8B, 0x8C, 0xD6,       /* mov rcx, [rsi + rdx * 8 + offsetof(...)] */
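On the displacements used above (an editorial note): MAX_BPF_STACK is 512, which is where the old hard-coded comment value came from, while the new slot follows the program's own rounded stack depth:

	/* old fixed slot:  -36 - MAX_BPF_STACK = -36 - 512 = -548  ->  [rbp - 548] */
	/* new per-program slot: */
	tcc_off = -4 - round_up(stack_depth, 8);
	/* e.g. stack_depth = 8  -> tcc_off = -12 */
	/* e.g. stack_depth = 40 -> tcc_off = -44 */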
@@ -394,48 +456,84 @@ static void emit_bpf_tail_call_indirect(u8 **pprog)
 	 * if (prog == NULL)
 	 *	goto out;
 	 */
-	EMIT3(0x48, 0x85, 0xC9);     /* test rcx,rcx */
-#define OFFSET3 (8 + RETPOLINE_RCX_BPF_JIT_SIZE)
+	EMIT3(0x48, 0x85, 0xC9);                  /* test rcx,rcx */
+#define OFFSET3 (off3 + RETPOLINE_RCX_BPF_JIT_SIZE)
 	EMIT2(X86_JE, OFFSET3);                   /* je out */
-	label3 = cnt;
 
-	/* goto *(prog->bpf_func + prologue_size); */
+	*pprog = prog;
+	pop_callee_regs(pprog, callee_regs_used);
+	prog = *pprog;
+
+	EMIT1(0x58);                              /* pop rax */
+	EMIT3_off32(0x48, 0x81, 0xC4,             /* add rsp, sd */
+		    round_up(stack_depth, 8));
+
+	/* goto *(prog->bpf_func + X86_TAIL_CALL_OFFSET); */
 	EMIT4(0x48, 0x8B, 0x49,                   /* mov rcx, qword ptr [rcx + 32] */
 	      offsetof(struct bpf_prog, bpf_func));
-	EMIT4(0x48, 0x83, 0xC1, PROLOGUE_SIZE);   /* add rcx, prologue_size */
-
+	EMIT4(0x48, 0x83, 0xC1,                   /* add rcx, X86_TAIL_CALL_OFFSET */
+	      X86_TAIL_CALL_OFFSET);
 	/*
 	 * Now we're ready to jump into next BPF program
 	 * rdi == ctx (1st arg)
-	 * rcx == prog->bpf_func + prologue_size
+	 * rcx == prog->bpf_func + X86_TAIL_CALL_OFFSET
 	 */
 	RETPOLINE_RCX_BPF_JIT();
 
 	/* out: */
-	BUILD_BUG_ON(cnt - label1 != OFFSET1);
-	BUILD_BUG_ON(cnt - label2 != OFFSET2);
-	BUILD_BUG_ON(cnt - label3 != OFFSET3);
 	*pprog = prog;
 }
 
 static void emit_bpf_tail_call_direct(struct bpf_jit_poke_descriptor *poke,
-				      u8 **pprog, int addr, u8 *image)
+				      u8 **pprog, int addr, u8 *image,
+				      bool *callee_regs_used, u32 stack_depth)
 {
+	int tcc_off = -4 - round_up(stack_depth, 8);
 	u8 *prog = *pprog;
+	int pop_bytes = 0;
+	int off1 = 27;
+	int poke_off;
 	int cnt = 0;
 
+	/* count the additional bytes used for popping callee regs to stack
+	 * that need to be taken into account for jump offset that is used for
+	 * bailing out from of the tail call when limit is reached
+	 */
+	pop_bytes = get_pop_bytes(callee_regs_used);
+	off1 += pop_bytes;
+
+	/*
+	 * total bytes for:
+	 * - nop5/ jmpq $off
+	 * - pop callee regs
+	 * - sub rsp, $val
+	 * - pop rax
+	 */
+	poke_off = X86_PATCH_SIZE + pop_bytes + 7 + 1;
+
 	/*
 	 * if (tail_call_cnt > MAX_TAIL_CALL_CNT)
 	 *	goto out;
 	 */
-	EMIT2_off32(0x8B, 0x85, -36 - MAX_BPF_STACK); /* mov eax, dword ptr [rbp - 548] */
+	EMIT2_off32(0x8B, 0x85, tcc_off);             /* mov eax, dword ptr [rbp - tcc_off] */
 	EMIT3(0x83, 0xF8, MAX_TAIL_CALL_CNT);         /* cmp eax, MAX_TAIL_CALL_CNT */
-	EMIT2(X86_JA, 14);                            /* ja out */
+	EMIT2(X86_JA, off1);                          /* ja out */
 	EMIT3(0x83, 0xC0, 0x01);                      /* add eax, 1 */
-	EMIT2_off32(0x89, 0x85, -36 - MAX_BPF_STACK); /* mov dword ptr [rbp -548], eax */
+	EMIT2_off32(0x89, 0x85, tcc_off);             /* mov dword ptr [rbp - tcc_off], eax */
 
+	poke->tailcall_bypass = image + (addr - poke_off - X86_PATCH_SIZE);
+	poke->adj_off = X86_TAIL_CALL_OFFSET;
 	poke->tailcall_target = image + (addr - X86_PATCH_SIZE);
-	poke->adj_off = PROLOGUE_SIZE;
+	poke->bypass_addr = (u8 *)poke->tailcall_target + X86_PATCH_SIZE;
+
+	emit_jump(&prog, (u8 *)poke->tailcall_target + X86_PATCH_SIZE,
+		  poke->tailcall_bypass);
+
+	*pprog = prog;
+	pop_callee_regs(pprog, callee_regs_used);
+	prog = *pprog;
+	EMIT1(0x58);                                  /* pop rax */
+	EMIT3_off32(0x48, 0x81, 0xC4, round_up(stack_depth, 8));
 
 	memcpy(prog, ideal_nops[NOP_ATOMIC5], X86_PATCH_SIZE);
 	prog += X86_PATCH_SIZE;
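Connecting poke_off to the disassembly quoted in the commit message (editorial arithmetic; the example assumes only rbx is popped, as in that listing):

	poke_off = X86_PATCH_SIZE + pop_bytes + 7 + 1
	         = 5 (jmpq/nop5) + 1 (pop %rbx) + 7 (add $imm32,%rsp) + 1 (pop %rax)
	         = 14
	/* tailcall_bypass therefore sits 14 bytes before tailcall_target, and
	 * 14 + 5 = 19 bytes before the end of the target's nop5 -- matching the
	 * listing above: 0x...ab4 - 0x...aa1 = 0x13 = 19 bytes
	 */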
@@ -476,6 +574,11 @@ static void bpf_tail_call_direct_fixup(struct bpf_prog *prog)
 						   (u8 *)target->bpf_func +
 						   poke->adj_off, false);
 			BUG_ON(ret < 0);
+			ret = __bpf_arch_text_poke(poke->tailcall_bypass,
+						   BPF_MOD_JUMP,
+						   (u8 *)poke->tailcall_target +
+						   X86_PATCH_SIZE, NULL, false);
+			BUG_ON(ret < 0);
 		}
 		WRITE_ONCE(poke->tailcall_target_stable, true);
 		mutex_unlock(&array->aux->poke_mutex);
@@ -654,19 +757,49 @@ static bool ex_handler_bpf(const struct exception_table_entry *x,
 	return true;
 }
 
+static void detect_reg_usage(struct bpf_insn *insn, int insn_cnt,
+			     bool *regs_used, bool *tail_call_seen)
+{
+	int i;
+
+	for (i = 1; i <= insn_cnt; i++, insn++) {
+		if (insn->code == (BPF_JMP | BPF_TAIL_CALL))
+			*tail_call_seen = true;
+		if (insn->dst_reg == BPF_REG_6 || insn->src_reg == BPF_REG_6)
+			regs_used[0] = true;
+		if (insn->dst_reg == BPF_REG_7 || insn->src_reg == BPF_REG_7)
+			regs_used[1] = true;
+		if (insn->dst_reg == BPF_REG_8 || insn->src_reg == BPF_REG_8)
+			regs_used[2] = true;
+		if (insn->dst_reg == BPF_REG_9 || insn->src_reg == BPF_REG_9)
+			regs_used[3] = true;
+	}
+}
+
 static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 		  int oldproglen, struct jit_context *ctx)
 {
+	bool tail_call_reachable = bpf_prog->aux->tail_call_reachable;
 	struct bpf_insn *insn = bpf_prog->insnsi;
+	bool callee_regs_used[4] = {};
 	int insn_cnt = bpf_prog->len;
+	bool tail_call_seen = false;
 	bool seen_exit = false;
 	u8 temp[BPF_MAX_INSN_SIZE + BPF_INSN_SAFETY];
 	int i, cnt = 0, excnt = 0;
 	int proglen = 0;
 	u8 *prog = temp;
 
+	detect_reg_usage(insn, insn_cnt, callee_regs_used,
+			 &tail_call_seen);
+
+	/* tail call's presence in current prog implies it is reachable */
+	tail_call_reachable |= tail_call_seen;
+
 	emit_prologue(&prog, bpf_prog->aux->stack_depth,
-		      bpf_prog_was_classic(bpf_prog));
+		      bpf_prog_was_classic(bpf_prog), tail_call_reachable,
+		      bpf_prog->aux->func_idx != 0);
+	push_callee_regs(&prog, callee_regs_used);
 	addrs[0] = prog - temp;
 
 	for (i = 1; i <= insn_cnt; i++, insn++) {
@@ -1104,16 +1237,27 @@ xadd: if (is_imm8(insn->off))
 			/* call */
 		case BPF_JMP | BPF_CALL:
 			func = (u8 *) __bpf_call_base + imm32;
-			if (!imm32 || emit_call(&prog, func, image + addrs[i - 1]))
-				return -EINVAL;
+			if (tail_call_reachable) {
+				EMIT3_off32(0x48, 0x8B, 0x85,
+					    -(bpf_prog->aux->stack_depth + 8));
+				if (!imm32 || emit_call(&prog, func, image + addrs[i - 1] + 7))
+					return -EINVAL;
+			} else {
+				if (!imm32 || emit_call(&prog, func, image + addrs[i - 1]))
+					return -EINVAL;
+			}
 			break;
 
 		case BPF_JMP | BPF_TAIL_CALL:
 			if (imm32)
 				emit_bpf_tail_call_direct(&bpf_prog->aux->poke_tab[imm32 - 1],
-							  &prog, addrs[i], image);
+							  &prog, addrs[i], image,
+							  callee_regs_used,
+							  bpf_prog->aux->stack_depth);
 			else
-				emit_bpf_tail_call_indirect(&prog);
+				emit_bpf_tail_call_indirect(&prog,
+							    callee_regs_used,
+							    bpf_prog->aux->stack_depth);
 			break;
 
 			/* cond jump */
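Editorial note on the '+ 7' in the tail-call-reachable branch above: the call is preceded by a 7-byte reload of the tail call counter, so the actual call instruction starts 7 bytes past addrs[i - 1]:

	EMIT3_off32(0x48, 0x8B, 0x85, -(bpf_prog->aux->stack_depth + 8));
	/* mov rax, qword ptr [rbp - (stack_depth + 8)]:
	 * 3 opcode/modrm bytes + 4 displacement bytes = 7 bytes,
	 * hence emit_call(&prog, func, image + addrs[i - 1] + 7)
	 */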
@@ -1296,12 +1440,9 @@ xadd: if (is_imm8(insn->off))
 			seen_exit = true;
 			/* Update cleanup_addr */
 			ctx->cleanup_addr = proglen;
-			if (!bpf_prog_was_classic(bpf_prog))
-				EMIT1(0x5B); /* get rid of tail_call_cnt */
-			EMIT2(0x41, 0x5F);   /* pop r15 */
-			EMIT2(0x41, 0x5E);   /* pop r14 */
-			EMIT2(0x41, 0x5D);   /* pop r13 */
-			EMIT1(0x5B);         /* pop rbx */
+			pop_callee_regs(&prog, callee_regs_used);
+			if (tail_call_reachable)
+				EMIT1(0x59); /* pop rcx, get rid of tail_call_cnt */
 			EMIT1(0xC9);         /* leave */
 			EMIT1(0xC3);         /* ret */
 			break;
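For completeness, a sketch (editorial, not from the commit) of the epilogue this now yields for a tail-call-reachable program that only used BPF_REG_6:

	pop rbx          /* pop_callee_regs(): only the callee-saved reg in use */
	pop rcx          /* 0x59: discard tail_call_cnt into a volatile reg     */
	leave
	ret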
