[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

RE: Last Call: Security Architecture for the Internet Protocol to Proposed Standard



Gee, what a great opportunity to guess how TCP works in front of experts,
but:

I thought the problem with advertising windows that are "too large" is that
(at least some) TCPs keep trying to probe unsuccessfully into the "too
large" region.

Doubling the number of segments every round trip (slow start) does stop after
the first few round trips, but I thought TCPs then continued to increase the
number of segments sent by one every round trip (congestion avoidance), until
the sender starts missing ACKs.
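That growth pattern can be sketched as a toy model (a hedged illustration of
RFC 2001-era behavior, not any particular stack: segments per round trip,
losses ignored, ssthresh picked arbitrarily):

```python
def cwnd_per_rtt(ssthresh, rtts):
    """Segments sent in each of the first `rtts` round trips."""
    cwnd = 1
    history = []
    for _ in range(rtts):
        history.append(cwnd)
        if cwnd < ssthresh:
            cwnd *= 2   # slow start: double every round trip
        else:
            cwnd += 1   # congestion avoidance: one more segment per round trip
    return history

print(cwnd_per_rtt(16, 8))   # [1, 2, 4, 8, 16, 17, 18, 19]
```

The point is the tail: even after the doubling stops, the window keeps creeping
up by one segment per round trip, which is the probing I'm describing above.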

This allows TCPs to "speed up" when cross-traffic stops; when the FTP over
(at least part of) your path stops, your TCP throughput will expand to fill
the capacity left.

I'm not questioning Peter's math, but it sure sounds like his TCP doesn't
probe the way I thought TCPs do.

And, on a related topic, I thought the problem with source quench was that
IPsec hasn't been deployed in sufficient volume to prevent massive denial of
service attacks (don't like someone? tell them to shut up). 

(and this is why I left IPsec on the cc: list)

You can flame me in person tomorrow, if you're in LA...

Spencer

> ----------
> From: 	Peter Warren[SMTP:pwarren@gte.com]
> Sent: 	Thursday, April 02, 1998 1:16 PM
> To: 	Phil Karn
> Cc: 	travis@clark.net; huitema@bellcore.com; smb@research.att.com;
> ablair@EROLS.COM; sommerfeld@orchard.arlington.ma.us; ipsec@tis.com;
> tcp-over-satellite@achtung.sp.trw.com
> Subject: 	Re: Last Call: Security Architecture for the Internet
> Protocol to  Proposed Standard
> 
> [Phil Karn wrote:]
> >One problem we do have with the existing TCP congestion control
> >mechanism is that the sender will increase its congestion window all
> >the way up to the offered window as long as no packets are lost, even
> >if all those extra packets just pile up in a queue at the bottleneck
> >router. Various ad-hoc methods involving real-time bandwidth/delay
> >measurements have been tried to solve this problem, but they don't
> >seem to work really well because of measurement noise (competing
> >traffic, route changes, etc).
> 
> It seems to me that, since the sender uses incoming ACKs to clock out its
> data packets, the rate of ACKs from the receiver will act as a brake on
> the data stream, no matter how big the effective TCP window is.
> 
> I have seen this in operation during a bulk transfer over an ADSL link
> (1.5Mbps/64Kbps) where the receiver is set to advertise a 24KByte window
> and the RTT is about 45 msec.  In this case, the downstream ADSL modem
> buffer is the bottleneck. (Here, the window is set wide enough to allow a
> downstream rate of over 4Mbps, so the limiting factor is the downstream
> ADSL link rate of 1.5Mbps). By looking at packet arrival times at the ADSL
> modem vs. those at the receiver, I see that the downstream ADSL modem's
> queue starts to fill up fairly quickly. However, at a certain point (one
> second into the transfer), the increase stops, and the queue level drops
> gradually at a constant rate until the end of the transfer. And I've
> verified that *no* packets have been dropped, in either direction, and
> there are no retransmissions or duplicate ACKs. So the server did not
> enter congestion avoidance or slow start at any point. I infer that it is
> using the rate of returning ACKs to adjust its data rate. Am I overlooking
> anything?
> 
> If this is true, then it seems it would do no harm for the receiver to
> advertise the largest window possible, by default, rather than the paltry
> 8760 bytes we see commonly. I fully agree with Phil's comment:
> 
> > The receiver should not have
> >to artificially limit its window because of network considerations --
> >that's the function of congestion control on the transmitting end.
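For what it's worth, Peter's figures are consistent with a simple
window-limited rate calculation (a quick sketch; the 24 KByte window, 45 msec
RTT, and 8760-byte default are the values from his message):

```python
def window_limited_rate_bps(window_bytes, rtt_s):
    # A TCP sender can have at most one window in flight per round trip,
    # so the window alone caps throughput at window/RTT.
    return window_bytes * 8 / rtt_s

print(window_limited_rate_bps(24 * 1024, 0.045) / 1e6)  # ~4.37 Mbps > 1.5 Mbps link
print(window_limited_rate_bps(8760, 0.045) / 1e6)       # ~1.56 Mbps
```

So the 24 KByte window is indeed wide enough that the 1.5 Mbps ADSL link, not
the window, is the limit, while the common 8760-byte default would leave that
link barely covered at this RTT.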