Notes from USENIX NSDI Days 2 & 3

Before talking about Days 2 & 3, here are my notes from NSDI Day 1 if you are interested.

Again, all of the papers mentioned (and omitted) below are publicly accessible at the NSDI web page. Enjoy!

NSDI Day 2

Day 2 had the following sessions:

  • web and video
  • performance isolation and scaling
  • congestion control
  • cloud

Here are the papers I took detailed notes on. I ordered them by how interesting I found them.

Towards Battery-Free HD Video Streaming

This paper is by Saman Naderiparizi, Mehrdad Hessar, Vamsi Talla, Shyamnath Gollakota, and Joshua R. Smith, University of Washington.

This work was mind-blowing for me. This is a strong group that developed the WISP motes before, but even so the presented battery-free video streaming sensors were very impressive, and the talk included a live demo of them.

So here is the deal. The goal of this work is to design sticker form factor, battery-free cameras. But that sounds crazy, right? Video streaming is power hungry; how can you go battery-free?

The paper looks closely at the costs of the components of a video streaming camera. It turns out the image sensor, at 85 microwatts, is very low power. On the other hand, the radio, at 100 milliwatts, is very high power.

If we could only offload the task of communication away from the sensor, we could pull this off!

The inspiration for the idea comes from the Russian great seal bug. Inside the great seal was a very thin membrane that vibrated when there was talking. A directional remote radio was then used to receive that analog signal, and the Russian spies reconstructed the sound from it. The following is from the Wikipedia page on this seal bug, called "the Thing":
The "Thing" consisted of a tiny capacitive membrane connected to a modest quarter-wavelength antenna; it had no mightiness render or active electronic components. The device, a passive cavity resonator, became active solely when a radio signal of the right frequency was sent to the device from an external transmitter. Sound waves (from voices within the ambassador's office) passed through the sparse forest case, striking the membrane together with causing it to vibrate. The movement of the membrane varied the capacitance "seen" yesteryear the antenna, which inward plough modulated the radio waves that struck together with were re-transmitted yesteryear the Thing. A receiver demodulated the signal hence that audio picked upwards yesteryear the microphone could endure heard, only every bit an ordinary radio receiver demodulates radio signals together with outputs sound.
The group applies the same principle to cameras to make them battery-free. They showed on stage the first demo of analog video backscatter that sends pixels directly to the antenna. The range for the prototype of the system was 27 feet, and it streamed 112x112 resolution real-time video. A software defined radio was used to recreate the backscattered analog video.

For the analog hardware, the speakers said they got inspiration from human brain signals and created pulse width modulated pixels. More engineering went into performing intra-frame compression, leveraging the fact that neighboring pixels are fairly similar.
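The paper's pipeline is analog end to end, but to illustrate what "leveraging that neighboring pixels are similar" buys you, here is a minimal digital sketch (my own toy example, not the paper's design) of delta-coding pixels against their left neighbor so that most values are near zero and compress well:

```python
# Toy sketch of intra-frame compression via neighboring-pixel similarity
# (not the paper's analog implementation): encode each pixel as the
# difference from its left neighbor, so smooth frames yield tiny deltas.
import numpy as np

def delta_encode_rows(frame: np.ndarray) -> np.ndarray:
    """Replace each pixel (except the first in a row) with its difference
    from the left neighbor."""
    deltas = frame.astype(np.int16).copy()
    deltas[:, 1:] -= frame[:, :-1].astype(np.int16)
    return deltas

def delta_decode_rows(deltas: np.ndarray) -> np.ndarray:
    """Invert delta_encode_rows with a cumulative sum along each row."""
    return np.cumsum(deltas, axis=1).astype(np.uint8)

if __name__ == "__main__":
    # A smooth 112x112 test frame, roughly the paper's streaming resolution.
    frame = np.tile(np.arange(112, dtype=np.uint8), (112, 1))
    deltas = delta_encode_rows(frame)
    assert np.array_equal(delta_decode_rows(deltas), frame)
    print("mean |delta|:", np.abs(deltas[:, 1:]).mean())  # small for smooth frames
```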

Vesper: Measuring Time-to-Interactivity for Web Pages

This paper is by Ravi Netravali and Vikram Nathan, MIT CSAIL; James Mickens, Harvard University; and Hari Balakrishnan, MIT CSAIL.

This work asks the question "what does it mean for a page to load quickly?" In other words, how should we define the load time?

The way page loads work is like this. The URL typed in the browser prompts the server to send the HTML. The browser runs HTML + JavaScript (requesting other embedded items as it encounters them in the HTML, but that is the topic of Ravi and James's other paper in the session). In the browser there is a JavaScript engine and a rendering engine which constructs the DOM tree. The JS engine and the DOM tree interact via the DOM API.

Often the page load metric used is the page load time. Of course, this is very conservative, because only some of the page content is immediately visible and the invisible part doesn't matter, so why care about the load time of the invisible parts? Making this observation, Google came up with the speed index: time to render above-the-fold content, i.e., in the visible portion of the browser.

But the speed index is also deficient because it doesn't account for the JS code running time. The JS code running time affects the user experience, say via autocomplete, etc. An interactive page is not usable without JS, and today a large fraction of pages are interactive. The talk gave some statistics: a median of 182 handlers and a 95th percentile of 1252 handlers in the pages surveyed.

To improve on the speed index, this work comes up with the ready index, which is defined as page time-to-interactivity in terms of visibility and functionality.

But the challenge is that nobody knows a good way to automatically identify the interactive state: are the JavaScript handlers working yet?

The system, Vesper, uses a two-stage approach (a rough sketch of the resulting metric follows the list):

  • identify visible elements' event handlers, and the state those handlers access when fired
  • track the loading progress of the interactive state identified in stage 1
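To make the metric concrete, here is a back-of-the-envelope sketch of a ready-index-style computation (my own simplification, not Vesper's algorithm; the per-element weights and timestamps are assumed to come from the two measurement stages above):

```python
# Sketch only: like the speed index, but an element counts as "ready" once it
# is both rendered AND its event handlers plus the state they access have
# finished loading. Weights and timestamps are assumed inputs.

def ready_index(elements):
    """elements: list of (weight, visible_at_ms, functional_at_ms).
    Returns the time-integral of the not-yet-ready fraction (lower is better)."""
    total_weight = sum(w for w, _, _ in elements)
    ready_times = sorted((max(vis, func), w) for w, vis, func in elements)
    index, ready_weight, prev_t = 0.0, 0.0, 0.0
    for t, w in ready_times:
        index += (1.0 - ready_weight / total_weight) * (t - prev_t)
        ready_weight += w
        prev_t = t
    return index

# Example: the banner renders early but its click handler's state loads late.
print(ready_index([(3.0, 100, 900), (1.0, 250, 300), (1.0, 400, 400)]))
```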

As a side note, the speaker, Ravi, spoke in a relaxed and clear way. The talk didn't feel rushed, but covered a lot of material. I really loved the delivery.

Performance Analysis of Cloud Applications

This paper is by Dan Ardelean, Amer Diwan, and Chandra Erdman, Google.

This work from Google considers the question of how we should evaluate a change before we deploy it in production. Yes, the accepted approach is to use A/B testing over a small fraction of the users, but is that enough?

The abstract has this to say:
"Many pop cloud applications are large-scale distributed systems alongside each asking involving tens to thousands of RPCs together with large code bases. Because of their scale, performance optimizations without actionable supporting information are probable to endure ineffective: they volition add together complexity to an already complex arrangement ofttimes without jeopardy of a benefit. This newspaper describes the challenges inward collecting actionable information for Gmail, a service alongside to a greater extent than than 1 billion active accounts.
Using production information from Gmail nosotros present that both the charge together with the nature of the charge changes continuously. This makes Gmail performance hard to model alongside a synthetic examination together with hard to analyze inward production. We draw ii techniques for collecting actionable information from a production system. First, coordinated bursty tracing allows us to capture bursts of events across all layers of our stack simultaneously. Second, vertical context injection enables us combine high-level events alongside low-level events inward a holistic delineate without requiring us to explicitly propagate this information across our software stack."
Vertical context injection roughly means collecting the trace at the kernel level, using ftrace, where the layers above the kernel inject information into the kernel trace via stylized system calls carrying a payload.
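The paper's mechanism is based on stylized syscalls; as an illustration of the general idea of interleaving application-level events with a kernel trace, here is a small sketch that uses ftrace's trace_marker file instead (the path, event names, and JSON payload format are my assumptions, and writing to it needs appropriate privileges):

```python
# Illustrative sketch only, not the paper's mechanism: write high-level
# application events into the kernel's ftrace buffer via trace_marker, so
# they appear inline with scheduler, syscall, and interrupt events.
import json, os, time

TRACE_MARKER = "/sys/kernel/debug/tracing/trace_marker"  # or /sys/kernel/tracing/trace_marker

def emit_event(name: str, **attrs) -> None:
    """Inject a high-level event into the kernel trace stream."""
    payload = json.dumps({"event": name, "pid": os.getpid(), **attrs})
    try:
        with open(TRACE_MARKER, "w") as f:
            f.write(payload + "\n")
    except OSError:
        pass  # tracing not enabled or insufficient privileges; drop the marker

start = time.time()
emit_event("rpc_start", rpc="FetchThreadList")   # hypothetical RPC name
# ... handle the request ...
emit_event("rpc_end", rpc="FetchThreadList", ms=(time.time() - start) * 1000)
```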

The paper concludes with these observations. For meaningful performance experiments:

  • do experiments in production
  • use controlled A/B tests with tens of millions of users (less is not very meaningful)
  • use long-lived tests to capture the changing mix of requests
  • use creative approaches (vertical context injection) for collecting rich information cheaply.

LHD: Improving Cache Hit Rate by Maximizing Hit Density

This paper is by Nathan Beckmann, Carnegie Mellon University; Haoxian Chen, University of Pennsylvania; and Asaf Cidon, Stanford University and Barracuda Networks.

Who knew... cache eviction policies still need work, and you can achieve big improvements there.

To motivate the importance of cache hit rate research, Asaf mentioned the following. The key-value cache is 100x faster than the database. For Facebook, if you can improve its 98% cache hit rate by just another 1%, performance would improve by 35%.
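That claim checks out with some back-of-the-envelope arithmetic (my numbers, not the talk's): going from 98% to 99% halves the miss rate, and misses are what pay the ~100x database penalty.

```python
# Back-of-the-envelope check of the "1% more hit rate -> ~35% faster" claim
# (my arithmetic, not from the talk), using relative latencies.
t_cache, t_db = 1.0, 100.0   # database assumed ~100x slower than the cache

def avg_latency(hit_rate: float) -> float:
    return hit_rate * t_cache + (1.0 - hit_rate) * t_db

before, after = avg_latency(0.98), avg_latency(0.99)
print(before, after, f"{(before - after) / before:.0%} faster")  # 2.98 1.99 "33% faster"
```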

Here is the abstract:
Cloud application performance is heavily reliant on the hit rate of datacenter key-value caches. Key-value caches typically use least recently used (LRU) as their eviction policy, but LRU's hit rate is far from optimal under real workloads. Prior research has proposed many eviction policies that improve on LRU, but these policies make restrictive assumptions that hurt their hit rate, and they can be difficult to implement efficiently.
We introduce least hit density (LHD), a novel eviction policy for key-value caches. LHD predicts each object's expected hits-per-space-consumed (hit density), filtering objects that contribute little to the cache's hit rate. Unlike prior eviction policies, LHD does not rely on heuristics, but rather rigorously models objects' behavior using conditional probability to adapt its behavior in real time.
To make LHD practical, we design and implement RankCache, an efficient key-value cache based on memcached. We evaluate RankCache and LHD on commercial memcached and enterprise storage traces, where LHD consistently achieves better hit rates than prior policies. LHD requires much less space than prior policies to match their hit rate, on average 8x less than LRU and 2-3x less than recently proposed policies. Moreover, RankCache requires no synchronization in the common case, improving request throughput at 16 threads by 8x over LRU and by 2x over CLOCK.
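To make the "hit density" idea concrete, here is a toy sketch (not RankCache's implementation; the probability and lifetime estimates are assumed inputs) of ranking eviction candidates by expected hits per unit of space-time consumed:

```python
# Toy sketch of hit-density-based eviction: evict the object expected to
# deliver the fewest hits per byte-second of cache space it will occupy.
from dataclasses import dataclass

@dataclass
class CachedObject:
    key: str
    size_bytes: int
    hit_probability: float      # estimated chance of another hit before eviction
    expected_lifetime_s: float  # expected remaining time in the cache

def hit_density(obj: CachedObject) -> float:
    """Expected hits per (byte x second) this object will consume."""
    return obj.hit_probability / (obj.size_bytes * obj.expected_lifetime_s)

def choose_victim(objects: list[CachedObject]) -> CachedObject:
    return min(objects, key=hit_density)

objs = [
    CachedObject("big-cold", 64_000, 0.05, 30.0),
    CachedObject("small-hot", 200, 0.90, 5.0),
]
print(choose_victim(objs).key)  # "big-cold": low hit density, evicted first
```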

Poster session

There was a poster session at the end of day 2. I wish there were more of the preliminary-but-bold work in the poster session, because most of the posters were just a poster accompanying a paper presented in the main track.

I liked these two posters the most. They are both very interesting works in progress.
  • Distributed Test Case Generation using Model Inference. Stewart Grant and Ivan Beschastnikh, University of British Columbia
  • High Performance and Usable RPCs for Datacenter Fabrics. Anuj Kalia, Carnegie Mellon University; Michael Kaminsky, Intel Labs; David G. Andersen, Carnegie Mellon University

NSDI Day 3

Day 3 had these sessions:

  • Network monitoring and diagnosis
  • Fault-Tolerance
  • Physical Layer
  • Configuration Management

Since the sessions were networking specific, and since I was tired of the firehose of information spewed at me in the first two days, I didn't take many notes on day 3. So I will just include my notes on the Plover paper from the fault-tolerance session.

PLOVER: Fast, Multi-core Scalable Virtual Machine Fault-tolerance

This paper is by Cheng Wang, Xusheng Chen, Weiwei Jia, Boxuan Li, Haoran Qiu, Shixiong Zhao, and Heming Cui, The University of Hong Kong.

This work builds on the Remus paper, which appeared at NSDI '08 and received a test-of-time award this year at NSDI.

The two limitations of the Remus primary/backup VM replication approach were that:

  • too many memory pages need to be copied and transferred, and
  • a split brain is possible due to partitioning.

This work, Plover, uses Paxos to address these problems. Paxos helps with both. By using three nodes, it doesn't suffer from the split brain that the primary-backup approach suffers from. By totally ordering the requests seen by replicas, it can avoid copying memory pages. Replicas executing the same sequence of inputs should have the same state --- well, of course, assuming deterministic execution, that is.

The drawback with Paxos is that it cannot handle non-determinism in request execution. To fix this, Plover invokes VM page synchronization periodically before releasing replies.

The good news is that using Paxos to totally order requests makes most memory pages the same: the paper reports 97% of the pages being identical. So the VM synchronization is lightweight, because it only needs to take care of the remaining 3% of pages.
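Here is a toy sketch of what such a page synchronization step might look like (the hashing/diffing details are my assumptions, not Plover's implementation): compare per-page digests between primary and backup and ship only the pages that diverged.

```python
# Toy sketch of periodic VM page synchronization: cheap when ~97% of pages
# already match thanks to totally ordered request execution.
import hashlib

PAGE_SIZE = 4096

def page_digests(memory: bytes) -> list[bytes]:
    return [hashlib.sha256(memory[i:i + PAGE_SIZE]).digest()
            for i in range(0, len(memory), PAGE_SIZE)]

def pages_to_ship(primary_mem: bytes, backup_digests: list[bytes]) -> dict[int, bytes]:
    """Return {page_index: page_bytes} for pages whose digests differ on the backup."""
    diff = {}
    for idx, digest in enumerate(page_digests(primary_mem)):
        if idx >= len(backup_digests) or digest != backup_digests[idx]:
            diff[idx] = primary_mem[idx * PAGE_SIZE:(idx + 1) * PAGE_SIZE]
    return diff

# Example: two nearly identical memory images differing in one page.
primary = bytearray(b"\x00" * (PAGE_SIZE * 8))
backup = bytearray(primary)
primary[5 * PAGE_SIZE] = 0xFF  # divergence caused by non-determinism
print(sorted(pages_to_ship(bytes(primary), page_digests(bytes(backup)))))  # [5]
```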

Plover is available on GitHub.

I wonder, since Plover already does VM synchronization, does it really need 100% totally ordered requests delivered to the replicas via Paxos? Would it be possible to use a relaxed but faster solution? The TAPIR project explored relaxed ordering of operations for storage systems, and some of its lessons may be applicable here.

MAD questions

OK, the MAD questions today pick up the thread from last time. How do you improve the conference experience? Are conferences cramming too many technical sessions into a day? What can be done differently to improve the interactivity and networking of the conference participants?

A major reason I go to conferences is to meet people doing interesting work and converse with them, to learn about their perspectives and thought-processes.

During the three days at NSDI, I talked with 14 people. That is not a bad number for me. I am not from the networking and NSDI community, so I don't know most people there. I get more chances to interact with people if I go to a conference where I know more people. Unfortunately, since I kept switching research areas (theoretical distributed systems 98-00, wireless sensor networks 00-10, smartphones/crowdsourcing 10-13, cloud computing 13-18), I don't have a home conference where I know most people.

Out of these 14 people, I only knew three of them before. Of the rest, I knew a couple from interacting on Twitter, but the remaining majority were cold-hello, first-time interactions.

The cold-hello interactions are hard, and as an introvert and shy person (except when I am curious) I had to force myself to have these first-time interactions. I assume the people I approach are also interested in talking to people (that is what conferences are supposed to be about), and we can have nice, interesting conversations since we have some shared interest in distributed systems and at least in research. I would say 75% of the conversations I had were interesting and not superficial. But sometimes it bombs, and that gets awkward. And instead of being happy about the nice interactions you have, it is easy to focus on the awkward ones and feel bad about them.

Although I am happy with meeting 14 interesting people, this is so much lower than the number of people I meet and talk with at HPTS. If you look at my posts about HPTS, you can see that I made it a point to emphasize how much I enjoyed the interactivity of HPTS.

I think a major way HPTS makes this happen is that it sets the intentions clearly and states them explicitly on the first day. Pat Helland takes the stage and says that "the point of HPTS is to meet other people and interact, and the sessions are just a break from meeting/networking with other people". Since HPTS makes the cold-hello the norm, it does not feel weird anymore. I never had an awkward conversation at HPTS.

I am sure there are many ways to build interactivity and networking into conferences. Why don't we make the poster session a long session in the afternoon, rather than after 6pm? Are there any ice-breaker activities that conferences can adopt? I remember that at an activity with 20 people, the moderator asked everyone to say something uniquely quirky about themselves. That broke the ice pretty quickly; I still remember some of the quirks people mentioned. Maybe to scale to larger groups, it may be possible to have open-mic crazy-ideas, dangerous-ideas, and hot-takes sessions. Maybe we need to get some professionals to help, say from improv comedy or professional event ice-breaking. (I assume big companies like Google have skilled HR people to help tech people interact better, right?)

OK, this can get long, and I am not really knowledgeable in this area, so I will stop here. But here is my ask. Next time you see me at a conference, please approach me and say hello. I am sure we will have a nice conversation and we will have things to learn from each other.

Maybe next time I should make a badge to make this explicit: "Please say hi to me, I would love to meet and talk to you."
