
Commit 6ba8a3b

Nandita Dukkipati authored; davem330 committed
tcp: Tail loss probe (TLP)
This patch series implements the Tail loss probe (TLP) algorithm described
in http://tools.ietf.org/html/draft-dukkipati-tcpm-tcp-loss-probe-01. The
first patch implements the basic algorithm.

TLP's goal is to reduce tail latency of short transactions. It achieves
this by converting retransmission timeouts (RTOs) occurring due to tail
losses (losses at the end of transactions) into fast recovery. TLP
transmits one packet in two round-trips when a connection is in Open state
and isn't receiving any ACKs. The transmitted packet, aka loss probe, can
be either new or a retransmission. When there is tail loss, the ACK from
a loss probe triggers FACK/early-retransmit based fast recovery, thus
avoiding a costly RTO. In the absence of loss, there is no change in the
connection state.

PTO stands for probe timeout. It is a timer event indicating that an ACK
is overdue and triggers a loss probe packet. The PTO value is set to
max(2*SRTT, 10ms) and is adjusted to account for the delayed ACK timer
when there is only one outstanding packet.

TLP Algorithm

On transmission of new data in Open state:
  -> packets_out > 1: schedule PTO in max(2*SRTT, 10ms).
  -> packets_out == 1: schedule PTO in max(2*RTT, 1.5*RTT + 200ms)
  -> PTO = min(PTO, RTO)

Conditions for scheduling PTO:
  -> Connection is in Open state.
  -> Connection is either cwnd limited or no new data to send.
  -> Number of probes per tail loss episode is limited to one.
  -> Connection is SACK enabled.

When PTO fires:
  new_segment_exists:
    -> transmit new segment.
    -> packets_out++. cwnd remains same.
  no_new_packet:
    -> retransmit the last segment.
       Its ACK triggers FACK or early retransmit based recovery.

ACK path:
  -> rearm RTO at start of ACK processing.
  -> reschedule PTO if need be.

In addition, the patch includes a small variation to the Early Retransmit
(ER) algorithm, such that ER and TLP together can in principle recover any
N-degree of tail loss through fast recovery. TLP is controlled by the same
sysctl as ER, the tcp_early_retrans sysctl:

  tcp_early_retrans==0; disables TLP and ER.
                   ==1; enables RFC5827 ER.
                   ==2; delayed ER.
                   ==3; TLP and delayed ER. [DEFAULT]
                   ==4; TLP only.

The TLP patch series has been extensively tested on Google Web servers.
It is most effective for short Web transactions, where it reduced RTOs by
15% and improved HTTP response time (average by 6%, 99th percentile by
10%). The transmitted probes account for <0.5% of the overall
transmissions.

Signed-off-by: Nandita Dukkipati <[email protected]>
Acked-by: Neal Cardwell <[email protected]>
Acked-by: Yuchung Cheng <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
1 parent 83e519b commit 6ba8a3b

File tree

12 files changed: +171 −28 lines

Documentation/networking/ip-sysctl.txt

Lines changed: 6 additions & 2 deletions
@@ -190,15 +190,19 @@ tcp_early_retrans - INTEGER
 	Enable Early Retransmit (ER), per RFC 5827. ER lowers the threshold
 	for triggering fast retransmit when the amount of outstanding data is
 	small and when no previously unsent data can be transmitted (such
-	that limited transmit could be used).
+	that limited transmit could be used). Also controls the use of
+	Tail loss probe (TLP) that converts RTOs occuring due to tail
+	losses into fast recovery (draft-dukkipati-tcpm-tcp-loss-probe-01).
 	Possible values:
 		0 disables ER
 		1 enables ER
 		2 enables ER but delays fast recovery and fast retransmit
 		  by a fourth of RTT. This mitigates connection falsely
 		  recovers when network has a small degree of reordering
 		  (less than 3 packets).
-	Default: 2
+		3 enables delayed ER and TLP.
+		4 enables TLP only.
+	Default: 3

 tcp_ecn - INTEGER
 	Control use of Explicit Congestion Notification (ECN) by TCP.

include/linux/tcp.h

Lines changed: 0 additions & 1 deletion
@@ -201,7 +201,6 @@ struct tcp_sock {
 		unused : 1;
 	u8	repair_queue;
 	u8	do_early_retrans:1,	/* Enable RFC5827 early-retransmit */
-		early_retrans_delayed:1, /* Delayed ER timer installed */
 		syn_data:1,	/* SYN includes data */
 		syn_fastopen:1,	/* SYN includes Fast Open option */
 		syn_data_acked:1;/* data in SYN is acked by SYN-ACK */

include/net/inet_connection_sock.h

Lines changed: 4 additions & 1 deletion
@@ -133,6 +133,8 @@ struct inet_connection_sock {
 #define ICSK_TIME_RETRANS	1	/* Retransmit timer */
 #define ICSK_TIME_DACK		2	/* Delayed ack timer */
 #define ICSK_TIME_PROBE0	3	/* Zero window probe timer */
+#define ICSK_TIME_EARLY_RETRANS 4	/* Early retransmit timer */
+#define ICSK_TIME_LOSS_PROBE	5	/* Tail loss probe timer */

 static inline struct inet_connection_sock *inet_csk(const struct sock *sk)
 {

@@ -222,7 +224,8 @@ static inline void inet_csk_reset_xmit_timer(struct sock *sk, const int what,
 		when = max_when;
 	}

-	if (what == ICSK_TIME_RETRANS || what == ICSK_TIME_PROBE0) {
+	if (what == ICSK_TIME_RETRANS || what == ICSK_TIME_PROBE0 ||
+	    what == ICSK_TIME_EARLY_RETRANS || what == ICSK_TIME_LOSS_PROBE) {
 		icsk->icsk_pending = what;
 		icsk->icsk_timeout = jiffies + when;
 		sk_reset_timer(sk, &icsk->icsk_retransmit_timer, icsk->icsk_timeout);

include/net/tcp.h

Lines changed: 4 additions & 2 deletions
@@ -543,6 +543,8 @@ extern bool tcp_syn_flood_action(struct sock *sk,
 extern void tcp_push_one(struct sock *, unsigned int mss_now);
 extern void tcp_send_ack(struct sock *sk);
 extern void tcp_send_delayed_ack(struct sock *sk);
+extern void tcp_send_loss_probe(struct sock *sk);
+extern bool tcp_schedule_loss_probe(struct sock *sk);

 /* tcp_input.c */
 extern void tcp_cwnd_application_limited(struct sock *sk);

@@ -873,8 +875,8 @@ static inline void tcp_enable_fack(struct tcp_sock *tp)
 static inline void tcp_enable_early_retrans(struct tcp_sock *tp)
 {
 	tp->do_early_retrans = sysctl_tcp_early_retrans &&
-		!sysctl_tcp_thin_dupack && sysctl_tcp_reordering == 3;
-	tp->early_retrans_delayed = 0;
+		sysctl_tcp_early_retrans < 4 && !sysctl_tcp_thin_dupack &&
+		sysctl_tcp_reordering == 3;
 }

 static inline void tcp_disable_early_retrans(struct tcp_sock *tp)

include/uapi/linux/snmp.h

Lines changed: 1 addition & 0 deletions
@@ -202,6 +202,7 @@ enum
 	LINUX_MIB_TCPFORWARDRETRANS,	/* TCPForwardRetrans */
 	LINUX_MIB_TCPSLOWSTARTRETRANS,	/* TCPSlowStartRetrans */
 	LINUX_MIB_TCPTIMEOUTS,		/* TCPTimeouts */
+	LINUX_MIB_TCPLOSSPROBES,	/* TCPLossProbes */
 	LINUX_MIB_TCPRENORECOVERYFAIL,	/* TCPRenoRecoveryFail */
 	LINUX_MIB_TCPSACKRECOVERYFAIL,	/* TCPSackRecoveryFail */
 	LINUX_MIB_TCPSCHEDULERFAILED,	/* TCPSchedulerFailed */

net/ipv4/inet_diag.c

Lines changed: 3 additions & 1 deletion
@@ -158,7 +158,9 @@ int inet_sk_diag_fill(struct sock *sk, struct inet_connection_sock *icsk,

 #define EXPIRES_IN_MS(tmo)  DIV_ROUND_UP((tmo - jiffies) * 1000, HZ)

-	if (icsk->icsk_pending == ICSK_TIME_RETRANS) {
+	if (icsk->icsk_pending == ICSK_TIME_RETRANS ||
+	    icsk->icsk_pending == ICSK_TIME_EARLY_RETRANS ||
+	    icsk->icsk_pending == ICSK_TIME_LOSS_PROBE) {
 		r->idiag_timer = 1;
 		r->idiag_retrans = icsk->icsk_retransmits;
 		r->idiag_expires = EXPIRES_IN_MS(icsk->icsk_timeout);

net/ipv4/proc.c

Lines changed: 1 addition & 0 deletions
@@ -224,6 +224,7 @@ static const struct snmp_mib snmp4_net_list[] = {
 	SNMP_MIB_ITEM("TCPForwardRetrans", LINUX_MIB_TCPFORWARDRETRANS),
 	SNMP_MIB_ITEM("TCPSlowStartRetrans", LINUX_MIB_TCPSLOWSTARTRETRANS),
 	SNMP_MIB_ITEM("TCPTimeouts", LINUX_MIB_TCPTIMEOUTS),
+	SNMP_MIB_ITEM("TCPLossProbes", LINUX_MIB_TCPLOSSPROBES),
 	SNMP_MIB_ITEM("TCPRenoRecoveryFail", LINUX_MIB_TCPRENORECOVERYFAIL),
 	SNMP_MIB_ITEM("TCPSackRecoveryFail", LINUX_MIB_TCPSACKRECOVERYFAIL),
 	SNMP_MIB_ITEM("TCPSchedulerFailed", LINUX_MIB_TCPSCHEDULERFAILED),

net/ipv4/sysctl_net_ipv4.c

Lines changed: 2 additions & 2 deletions
@@ -28,7 +28,7 @@

 static int zero;
 static int one = 1;
-static int two = 2;
+static int four = 4;
 static int tcp_retr1_max = 255;
 static int ip_local_port_range_min[] = { 1, 1 };
 static int ip_local_port_range_max[] = { 65535, 65535 };

@@ -760,7 +760,7 @@ static struct ctl_table ipv4_table[] = {
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
 		.extra1		= &zero,
-		.extra2		= &two,
+		.extra2		= &four,
 	},
 	{
 		.procname	= "udp_mem",

net/ipv4/tcp_input.c

Lines changed: 15 additions & 9 deletions
@@ -98,7 +98,7 @@ int sysctl_tcp_frto_response __read_mostly;
 int sysctl_tcp_thin_dupack __read_mostly;

 int sysctl_tcp_moderate_rcvbuf __read_mostly = 1;
-int sysctl_tcp_early_retrans __read_mostly = 2;
+int sysctl_tcp_early_retrans __read_mostly = 3;

 #define FLAG_DATA		0x01 /* Incoming frame contained data. */
 #define FLAG_WIN_UPDATE		0x02 /* Incoming ACK was a window update. */

@@ -2150,15 +2150,16 @@ static bool tcp_pause_early_retransmit(struct sock *sk, int flag)
 	 * max(RTT/4, 2msec) unless ack has ECE mark, no RTT samples
 	 * available, or RTO is scheduled to fire first.
 	 */
-	if (sysctl_tcp_early_retrans < 2 || (flag & FLAG_ECE) || !tp->srtt)
+	if (sysctl_tcp_early_retrans < 2 || sysctl_tcp_early_retrans > 3 ||
+	    (flag & FLAG_ECE) || !tp->srtt)
 		return false;

 	delay = max_t(unsigned long, (tp->srtt >> 5), msecs_to_jiffies(2));
 	if (!time_after(inet_csk(sk)->icsk_timeout, (jiffies + delay)))
 		return false;

-	inet_csk_reset_xmit_timer(sk, ICSK_TIME_RETRANS, delay, TCP_RTO_MAX);
-	tp->early_retrans_delayed = 1;
+	inet_csk_reset_xmit_timer(sk, ICSK_TIME_EARLY_RETRANS, delay,
+				  TCP_RTO_MAX);
 	return true;
 }

@@ -2321,7 +2322,7 @@ static bool tcp_time_to_recover(struct sock *sk, int flag)
 	 * interval if appropriate.
 	 */
 	if (tp->do_early_retrans && !tp->retrans_out && tp->sacked_out &&
-	    (tp->packets_out == (tp->sacked_out + 1) && tp->packets_out < 4) &&
+	    (tp->packets_out >= (tp->sacked_out + 1) && tp->packets_out < 4) &&
 	    !tcp_may_send_now(sk))
 		return !tcp_pause_early_retransmit(sk, flag);

@@ -3081,6 +3082,7 @@ static void tcp_cong_avoid(struct sock *sk, u32 ack, u32 in_flight)
  */
 void tcp_rearm_rto(struct sock *sk)
 {
+	const struct inet_connection_sock *icsk = inet_csk(sk);
 	struct tcp_sock *tp = tcp_sk(sk);

 	/* If the retrans timer is currently being used by Fast Open

@@ -3094,20 +3096,20 @@ void tcp_rearm_rto(struct sock *sk)
 	} else {
 		u32 rto = inet_csk(sk)->icsk_rto;
 		/* Offset the time elapsed after installing regular RTO */
-		if (tp->early_retrans_delayed) {
+		if (icsk->icsk_pending == ICSK_TIME_EARLY_RETRANS ||
+		    icsk->icsk_pending == ICSK_TIME_LOSS_PROBE) {
 			struct sk_buff *skb = tcp_write_queue_head(sk);
 			const u32 rto_time_stamp = TCP_SKB_CB(skb)->when + rto;
 			s32 delta = (s32)(rto_time_stamp - tcp_time_stamp);
 			/* delta may not be positive if the socket is locked
-			 * when the delayed ER timer fires and is rescheduled.
+			 * when the retrans timer fires and is rescheduled.
 			 */
 			if (delta > 0)
 				rto = delta;
 		}
 		inet_csk_reset_xmit_timer(sk, ICSK_TIME_RETRANS, rto,
 					  TCP_RTO_MAX);
 	}
-	tp->early_retrans_delayed = 0;
 }

 /* This function is called when the delayed ER timer fires. TCP enters

@@ -3601,7 +3603,8 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
 	if (after(ack, tp->snd_nxt))
 		goto invalid_ack;

-	if (tp->early_retrans_delayed)
+	if (icsk->icsk_pending == ICSK_TIME_EARLY_RETRANS ||
+	    icsk->icsk_pending == ICSK_TIME_LOSS_PROBE)
 		tcp_rearm_rto(sk);

 	if (after(ack, prior_snd_una))

@@ -3678,6 +3681,9 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
 		if (dst)
 			dst_confirm(dst);
 	}
+
+	if (icsk->icsk_pending == ICSK_TIME_RETRANS)
+		tcp_schedule_loss_probe(sk);
 	return 1;

 no_queue:

net/ipv4/tcp_ipv4.c

Lines changed: 3 additions & 1 deletion
@@ -2703,7 +2703,9 @@ static void get_tcp4_sock(struct sock *sk, struct seq_file *f, int i, int *len)
 	__u16 srcp = ntohs(inet->inet_sport);
 	int rx_queue;

-	if (icsk->icsk_pending == ICSK_TIME_RETRANS) {
+	if (icsk->icsk_pending == ICSK_TIME_RETRANS ||
+	    icsk->icsk_pending == ICSK_TIME_EARLY_RETRANS ||
+	    icsk->icsk_pending == ICSK_TIME_LOSS_PROBE) {
 		timer_active	= 1;
 		timer_expires	= icsk->icsk_timeout;
 	} else if (icsk->icsk_pending == ICSK_TIME_PROBE0) {

net/ipv4/tcp_output.c

Lines changed: 124 additions & 4 deletions
@@ -74,6 +74,7 @@ static bool tcp_write_xmit(struct sock *sk, unsigned int mss_now, int nonagle,
 /* Account for new data that has been sent to the network. */
 static void tcp_event_new_data_sent(struct sock *sk, const struct sk_buff *skb)
 {
+	struct inet_connection_sock *icsk = inet_csk(sk);
 	struct tcp_sock *tp = tcp_sk(sk);
 	unsigned int prior_packets = tp->packets_out;

@@ -85,7 +86,8 @@ static void tcp_event_new_data_sent(struct sock *sk, const struct sk_buff *skb)
 		tp->frto_counter = 3;

 	tp->packets_out += tcp_skb_pcount(skb);
-	if (!prior_packets || tp->early_retrans_delayed)
+	if (!prior_packets || icsk->icsk_pending == ICSK_TIME_EARLY_RETRANS ||
+	    icsk->icsk_pending == ICSK_TIME_LOSS_PROBE)
 		tcp_rearm_rto(sk);
 }

@@ -1959,6 +1961,9 @@ static int tcp_mtu_probe(struct sock *sk)
  * snd_up-64k-mss .. snd_up cannot be large. However, taking into
  * account rare use of URG, this is not a big flaw.
  *
+ * Send at most one packet when push_one > 0. Temporarily ignore
+ * cwnd limit to force at most one packet out when push_one == 2.
+ *
  * Returns true, if no segments are in flight and we have queued segments,
  * but cannot send anything now because of SWS or another problem.
  */

@@ -1994,8 +1999,13 @@ static bool tcp_write_xmit(struct sock *sk, unsigned int mss_now, int nonagle,
 			goto repair; /* Skip network transmission */

 		cwnd_quota = tcp_cwnd_test(tp, skb);
-		if (!cwnd_quota)
-			break;
+		if (!cwnd_quota) {
+			if (push_one == 2)
+				/* Force out a loss probe pkt. */
+				cwnd_quota = 1;
+			else
+				break;
+		}

 		if (unlikely(!tcp_snd_wnd_test(tp, skb, mss_now)))
 			break;

@@ -2049,10 +2059,120 @@ static bool tcp_write_xmit(struct sock *sk, unsigned int mss_now, int nonagle,
 	if (likely(sent_pkts)) {
 		if (tcp_in_cwnd_reduction(sk))
 			tp->prr_out += sent_pkts;
+
+		/* Send one loss probe per tail loss episode. */
+		if (push_one != 2)
+			tcp_schedule_loss_probe(sk);
 		tcp_cwnd_validate(sk);
 		return false;
 	}
-	return !tp->packets_out && tcp_send_head(sk);
+	return (push_one == 2) || (!tp->packets_out && tcp_send_head(sk));
+}
+
+bool tcp_schedule_loss_probe(struct sock *sk)
+{
+	struct inet_connection_sock *icsk = inet_csk(sk);
+	struct tcp_sock *tp = tcp_sk(sk);
+	u32 timeout, tlp_time_stamp, rto_time_stamp;
+	u32 rtt = tp->srtt >> 3;
+
+	if (WARN_ON(icsk->icsk_pending == ICSK_TIME_EARLY_RETRANS))
+		return false;
+	/* No consecutive loss probes. */
+	if (WARN_ON(icsk->icsk_pending == ICSK_TIME_LOSS_PROBE)) {
+		tcp_rearm_rto(sk);
+		return false;
+	}
+	/* Don't do any loss probe on a Fast Open connection before 3WHS
+	 * finishes.
+	 */
+	if (sk->sk_state == TCP_SYN_RECV)
+		return false;
+
+	/* TLP is only scheduled when next timer event is RTO. */
+	if (icsk->icsk_pending != ICSK_TIME_RETRANS)
+		return false;
+
+	/* Schedule a loss probe in 2*RTT for SACK capable connections
+	 * in Open state, that are either limited by cwnd or application.
+	 */
+	if (sysctl_tcp_early_retrans < 3 || !rtt || !tp->packets_out ||
+	    !tcp_is_sack(tp) || inet_csk(sk)->icsk_ca_state != TCP_CA_Open)
+		return false;
+
+	if ((tp->snd_cwnd > tcp_packets_in_flight(tp)) &&
+	     tcp_send_head(sk))
+		return false;
+
+	/* Probe timeout is at least 1.5*rtt + TCP_DELACK_MAX to account
+	 * for delayed ack when there's one outstanding packet.
+	 */
+	timeout = rtt << 1;
+	if (tp->packets_out == 1)
+		timeout = max_t(u32, timeout,
+				(rtt + (rtt >> 1) + TCP_DELACK_MAX));
+	timeout = max_t(u32, timeout, msecs_to_jiffies(10));
+
+	/* If RTO is shorter, just schedule TLP in its place. */
+	tlp_time_stamp = tcp_time_stamp + timeout;
+	rto_time_stamp = (u32)inet_csk(sk)->icsk_timeout;
+	if ((s32)(tlp_time_stamp - rto_time_stamp) > 0) {
+		s32 delta = rto_time_stamp - tcp_time_stamp;
+		if (delta > 0)
+			timeout = delta;
+	}
+
+	inet_csk_reset_xmit_timer(sk, ICSK_TIME_LOSS_PROBE, timeout,
+				  TCP_RTO_MAX);
+	return true;
+}
+
+/* When probe timeout (PTO) fires, send a new segment if one exists, else
+ * retransmit the last segment.
+ */
+void tcp_send_loss_probe(struct sock *sk)
+{
+	struct sk_buff *skb;
+	int pcount;
+	int mss = tcp_current_mss(sk);
+	int err = -1;
+
+	if (tcp_send_head(sk) != NULL) {
+		err = tcp_write_xmit(sk, mss, TCP_NAGLE_OFF, 2, GFP_ATOMIC);
+		goto rearm_timer;
+	}
+
+	/* Retransmit last segment. */
+	skb = tcp_write_queue_tail(sk);
+	if (WARN_ON(!skb))
+		goto rearm_timer;
+
+	pcount = tcp_skb_pcount(skb);
+	if (WARN_ON(!pcount))
+		goto rearm_timer;
+
+	if ((pcount > 1) && (skb->len > (pcount - 1) * mss)) {
+		if (unlikely(tcp_fragment(sk, skb, (pcount - 1) * mss, mss)))
+			goto rearm_timer;
+		skb = tcp_write_queue_tail(sk);
+	}
+
+	if (WARN_ON(!skb || !tcp_skb_pcount(skb)))
+		goto rearm_timer;
+
+	/* Probe with zero data doesn't trigger fast recovery. */
+	if (skb->len > 0)
+		err = __tcp_retransmit_skb(sk, skb);
+
+rearm_timer:
+	inet_csk_reset_xmit_timer(sk, ICSK_TIME_RETRANS,
+				  inet_csk(sk)->icsk_rto,
+				  TCP_RTO_MAX);

+	if (likely(!err))
+		NET_INC_STATS_BH(sock_net(sk),
+				 LINUX_MIB_TCPLOSSPROBES);
+	return;
 }

 /* Push out any pending frames which were held back due to