Tagged: acm

  • joly 9:55 am on 05/15/2013 Permalink
    Tags: acm, dc acm

    VIDEO: Vint Cerf – Reinventing the Internet #vintcerf @dcacm 

    On Monday May 13 2013, Vint Cerf addressed the Washington DC Chapter of the Association for Computing Machinery (DC ACM) on the topic ‘Reinventing the Internet’. Dr. Cerf is the ACM’s current president and, in 2004, a winner of its prestigious Turing Award. The presentation focused on the evolution of the Internet from its beginning in 1973 to its current state, and the need for updates to its architecture. It was webcast live by the Internet Society’s North America Bureau; the video is below.

    View on YouTube: http://youtu.be/qguED5Aouv4
    Transcribe on AMARA: http://www.amara.org/en/videos/IkM193LpXsEm/
    Twitter: #vintcerf | @dcacm

    The Internet was designed 40 years ago and has been in operation for 30 years. It has evolved considerably, but its architecture is still pretty much as it was in its 1973 incarnation. We have learned a great deal about the applications of the Internet in the intervening decades, and it is clear that there is room for improvement and expansion in several dimensions. The Internet of Things is rapidly emerging; mobiles are everywhere; the interplanetary internet is in nascent operation between the Earth and Mars. Security has become a major issue, as have authentication and integrity. These topics form the core of the presentation at DC ACM.

     
  • joly 2:17 am on 12/13/2011 Permalink
    Tags: acm, bufferbloat

    ACM Discussion – BufferBloat: What’s Wrong with the Internet? #bufferbloat @ACMQueue 

    In an ACM discussion, BufferBloat: What’s Wrong with the Internet? (pdf), TCP experts Vint Cerf, Van Jacobson, Nick Weaver, and Jim Gettys discuss the growing problem of clogged networks.

    From the preamble:

    Bufferbloat refers to excess buffering inside a network, resulting in high latency and reduced throughput. Some buffering is needed; it provides space to queue packets waiting for transmission, thus minimizing data loss. In the past, the high cost of memory kept buffers fairly small, so they filled quickly and packets began to drop shortly after the link became saturated, signaling to the communications protocol the presence of congestion and thus the need for compensating adjustments.

    Because memory now is significantly cheaper than it used to be, buffering has been overdone in all manner of network devices, without consideration for the consequences. Manufacturers have reflexively acted to prevent any and all packet loss and, by doing so, have inadvertently defeated a critical TCP congestion-detection mechanism, with the result being worsened congestion and increased latency.

    Now that the problem has been diagnosed, people are working feverishly to fix it. This case study considers the extent of the bufferbloat problem and its potential implications.
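
    To make the quoted mechanism concrete, here is a minimal back-of-the-envelope sketch (not from the ACM article) of how buffer size turns into latency: a packet arriving behind a full drop-tail buffer must wait for everything ahead of it to drain at the link rate before TCP ever sees a drop. The link speed and buffer sizes below are assumed example values, not figures from the discussion.

        # Hypothetical illustration: worst-case queueing delay added by a full
        # drop-tail buffer. All values are assumptions chosen for the example.

        def worst_case_queue_delay(buffer_bytes: int, link_bps: float) -> float:
            """Seconds a newly arrived packet waits while a full buffer drains."""
            return (buffer_bytes * 8) / link_bps

        LINK_BPS = 1_000_000  # assumed 1 Mbps uplink, common on older home broadband

        for buffer_bytes in (16_000, 256_000, 1_000_000):
            delay = worst_case_queue_delay(buffer_bytes, LINK_BPS)
            print(f"{buffer_bytes // 1000:>5} kB buffer -> up to {delay:.2f} s of added latency")

    On those assumed numbers, a 16 kB buffer keeps the loss signal within about an eighth of a second, while a 1 MB buffer can hold interactive traffic hostage for roughly eight seconds before a drop finally tells TCP to back off.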

     