Figure 2: Software architecture of an NFS Tee. To minimize potential impact on clients, we separate the relaying functionality from the other
three primary Tee functions (which contain the vast majority of the code). One or more NFS plug-ins can be dynamically initiated to compare a
SUT to the reference server with which clients are interacting.
4 An NFSv3 Tee
This section describes the design and implementation of
an NFSv3 Tee. It describes how components performing
the four primary Tee tasks are organized and explains
the architecture in terms of our design goals. It details
nuanced aspects of state synchronization and response
comparison, including some performance enhancements.
4.1 Goals and architecture
Our NFSv3 Tee’s architecture is driven by five design
goals. First, we want to be able to use the Tee in live environments,
which makes the reliability of the relay task
crucial. Second, we want to be able to dynamically add a
SUT and initiate comparison-based verification in a live
environment.4 Third, we want the Tee to operate using
reasonable amounts of machine resources, which pushes
us to minimize runtime state and perform complex comparisons
off-line in a post-processor. Fourth, we are more
concerned with achieving a functioning, robust Tee than
with performance, which guides us to have the Tee run
as application-level software, acting as an active intermediary.
Fifth, we want the comparison module to be
flexible so that a user can customize of the rules to increase
efficiency in the face of server idiosyncrasies that
are understood.
Figure 2 illustrates the software architecture of our
NFSv3 Tee, which includes modules for the four primary
tasks. The four modules are partitioned into two
processes. One process relays communication between
clients and the reference server. The other process (a
“plug-in”) performs the three tasks that involve interaction
with the SUT. The relay process exports RPC requests
and responses to the plug-in process via a queue
stored in shared memory. This two-process organization
was driven by the first two design goals: (1) running the
relay as a separate process isolates it from faults in the
plug-in components, which make up the vast majority
of the Tee code; (2) plug-ins can be started and stopped
without stopping client interactions with the reference
server.

4On a SUT running developmental software, developers may wish
to make code changes, recompile, and restart the server repeatedly.
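The two-process split can be sketched as follows. This is a minimal illustrative model, not the Tee's actual implementation: the class and field names are hypothetical, and an in-process queue stands in for the shared-memory queue between the relay and plug-in processes.

```python
# Sketch of the two-process organization: the relay only forwards traffic
# and enqueues copies of each exchange; all SUT-related work lives in the
# plug-in, so a plug-in fault cannot disturb client/reference traffic.
from queue import Queue  # stands in for the shared-memory queue

class Relay:
    def __init__(self, export_queue):
        self.export_queue = export_queue

    def handle(self, rpc_request, reference_server):
        # Forward to the reference server and reply to the client as usual.
        response = reference_server(rpc_request)
        # Export a copy of the exchange for any attached plug-in.
        self.export_queue.put((rpc_request, response))
        return response

class PlugIn:
    def __init__(self, export_queue):
        self.export_queue = export_queue
        self.seen = []

    def poll(self):
        # Drain exported request/response pairs for duplication/comparison.
        while not self.export_queue.empty():
            self.seen.append(self.export_queue.get())

q = Queue()
relay, plug_in = Relay(q), PlugIn(q)
reply = relay.handle({"op": "GETATTR", "fh": "fh-1"},
                     lambda req: {"status": "OK", "for": req["op"]})
plug_in.poll()
```

Because the relay never blocks on the plug-in, starting or stopping a plug-in leaves the client/reference-server path untouched.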
When a plug-in is started, it attaches to the shared memory
region and starts its three modules. The synchronization
module begins reading files and directories from the reference
server and writing them to the SUT. As it does so,
it stores reference server-to-SUT file handle mappings.
The duplication module examines each RPC request exported
by the relay and determines whether the relevant
SUT objects are synchronized. If so, an appropriate request
for the SUT is constructed. For most requests, this
simply involves mapping the file handles. The SUT’s response
is passed to the comparison module, which compares
it against the reference server’s response.
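The duplication step above can be sketched roughly as follows; the field names and map contents are illustrative assumptions, not the Tee's actual data structures.

```python
# Sketch of duplication: translate reference-server file handles to SUT
# handles recorded by the synchronization module, and duplicate a request
# only when every handle it names is already synchronized.
fh_map = {"ref-fh-1": "sut-fh-9"}   # built during synchronization

def duplicate(request, fh_map):
    sut_request = dict(request)
    for field in ("fh", "dir_fh"):   # handle-bearing fields (illustrative)
        if field in request:
            if request[field] not in fh_map:
                return None          # object not yet synchronized: skip
            sut_request[field] = fh_map[request[field]]
    return sut_request

dup = duplicate({"op": "READ", "fh": "ref-fh-1", "offset": 0}, fh_map)
skipped = duplicate({"op": "READ", "fh": "ref-fh-2", "offset": 0}, fh_map)
```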
Full comparison consists of two steps: a configurable
on-line step and an off-line step. For each mismatch
found in the on-line step, the request and both responses
are logged for off-line analysis. The on-line comparison
rules are specified in a configuration file that describes
how each response field should be compared.
Off-line post-processing prunes the log of non-matching
responses that do not represent true discrepancies (e.g.,
directory entries returned in different orders), and then
assists the user with visualizing the “problem” RPCs.
Off-line post-processing is useful for reducing on-line
overheads as well as allowing the user to refine comparison
rules without losing data from the real environment
(since the log is a filtered trace).
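The rule-driven on-line comparison might look like the following sketch. The field names and rule keywords ("exact", "ignore") are assumed for illustration and are not the paper's actual configuration syntax.

```python
# Sketch of configurable on-line comparison: each response field is checked
# per a rule from the configuration file; mismatching exchanges are logged
# for off-line analysis rather than analyzed in the request path.
rules = {"status": "exact", "data": "exact", "mtime": "ignore"}

def compare(ref_resp, sut_resp, rules):
    mismatches = []
    for field, rule in rules.items():
        if rule == "ignore":
            continue                 # field known to differ benignly
        if ref_resp.get(field) != sut_resp.get(field):
            mismatches.append(field)
    return mismatches

log = []
ref = {"status": 0, "data": b"abc", "mtime": 100}
sut = {"status": 0, "data": b"abd", "mtime": 200}
bad = compare(ref, sut, rules)
if bad:
    log.append((ref, sut, bad))      # saved for off-line post-processing
```

Marking volatile fields "ignore" keeps on-line overhead low, while the logged exchanges let the user refine the rules later without losing data.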
4.2 State synchronization
The synchronization module updates the SUT to enable
useful comparisons. Doing so requires making the SUT’s
internal state match the reference server’s to the point
that the two servers’ responses to a given RPC could be
expected to match. Fortunately, NFSv3 RPCs generally
manipulate only one or two file objects (regular files, directories,
or links), so some useful comparisons can be
made long before the entire file system is copied to the
SUT.
Synchronizing an object requires establishing a point
within the stream of requests where comparison could
begin. Then, as long as RPCs affecting that object are
handled in the same order by both servers, it will remain
synchronized. The lifetime of an object can be viewed
as a sequence of states, each representing the object as it
exists between two modifications. Synchronizing an object,
then, amounts to replicating one such state from the
reference server to the SUT.
Performing synchronization off-line (i.e., when the reference
server is not being used by any clients) would
be straightforward. But, one of our goals is the ability
to insert a SUT into a live environment at runtime.
This requires dealing with object changes that are concurrent
with the synchronization process. The desire not
to disrupt client activity precludes blocking requests to
an object that is being synchronized. The simplest solution
would be to restart synchronization of an object if a
modification RPC is sent to the reference server before it
completes. But, this could lead to unacceptably slow and
inefficient synchronization of large, frequently-modified
objects. Instead, our synchronization mechanism tracks
changes to objects that are being synchronized. RPCs are
sent to the reference server as usual, but are also saved in
a changeset for later replay against the SUT.
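The record-and-replay behavior can be sketched as below; the class and method names are hypothetical stand-ins for the Tee's internal state tracking.

```python
# Sketch of change tracking during synchronization: while an object's state
# is being copied, modification RPCs go to the reference server as usual
# but are also buffered in a changeset, replayed once the copy completes.
class SyncState:
    def __init__(self):
        self.copying = True
        self.changeset = []

    def on_modify(self, rpc, sut_apply):
        if self.copying:
            self.changeset.append(rpc)   # defer: copy still in progress
        else:
            sut_apply(rpc)               # object synchronized: duplicate now

    def copy_done(self, sut_apply):
        self.copying = False
        for rpc in self.changeset:       # replay deferred changes (Wr1')
            sut_apply(rpc)
        self.changeset.clear()

applied = []
obj = SyncState()
obj.on_modify({"op": "WRITE", "off": 0, "data": b"x"}, applied.append)
obj.copy_done(applied.append)
obj.on_modify({"op": "WRITE", "off": 8, "data": b"y"}, applied.append)
```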
Figure 3 illustrates synchronization in the presence of
write concurrency. The state S1 is first copied from the
reference server to the SUT. While this copy is taking
place, a write (Wr1) arrives and is sent to the reference
server. Wr1 is not duplicated to the SUT until the copy of
S1 completes. Instead, it is recorded at the Tee. When the
copy of S1 completes, a new write, Wr1’, is constructed
based on Wr1 and sent to the SUT. Since no further concurrent
changes need to be replayed, the object is marked
Figure 3: Synchronization with a concurrent write. The top
series of states depicts a part of the lifetime of an object on the reference
server. The bottom series of states depicts the corresponding object on
the SUT. Horizontal arrows are requests executed on a server (reference
or SUT), and diagonal arrows are full object copies. Synchronization
begins with copying state S1 onto the SUT. During the copy of S1, write
Wr1 changes the object on the reference server. At the completion of
the copy of S1, the objects are again out of synchronization. Wr1’ is
the write constructed from the buffered version of Wr1 and replayed on
the SUT.
synchronized and all subsequent requests referencing it
are eligible for duplication and comparison.
Even after initial synchronization, concurrent and overlapping
updates (e.g., Wr1 and Wr2 in Figure 4) can
cause a file object to become unsynchronized. Two requests
are deemed overlapping if they both affect the
same state. Two requests are deemed concurrent if the
second one arrives at the relay before the first one’s response.
This definition of concurrency accounts for both
network reordering and server reordering. Since the Tee
has no reliable way to determine the order in which concurrent
requests are executed on the reference server, any
state affected by both Wr1 and Wr2 is indeterminate.
Resynchronizing the object requires re-copying the affected
state from the reference server to the SUT. Since
overlapping concurrency is rare, our Tee simply marks
the object unsynchronized and repeats the process entirely.
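A sketch of these two tests, under the assumption that each request is modeled with arrival and response timestamps observed at the relay and that overlap reduces to byte-extent intersection on the same file:

```python
# Two requests are concurrent if the second arrives at the relay before the
# first's response; they overlap if they affect the same state (modeled here
# as intersecting byte extents of the same file).
def concurrent(a, b):
    first, second = (a, b) if a["arrive"] <= b["arrive"] else (b, a)
    return second["arrive"] < first["respond"]

def overlapping(a, b):
    return (a["fh"] == b["fh"]
            and a["off"] < b["off"] + b["len"]
            and b["off"] < a["off"] + a["len"])

wr1 = {"fh": "fh-1", "off": 0, "len": 8, "arrive": 1, "respond": 5}
wr2 = {"fh": "fh-1", "off": 4, "len": 8, "arrive": 3, "respond": 7}
# Concurrent and overlapping: execution order is unknown, so the affected
# state is indeterminate and the object must be marked unsynchronized.
must_resync = concurrent(wr1, wr2) and overlapping(wr1, wr2)
```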
The remainder of this section provides details regarding
synchronization of files and directories, and describes
some synchronization ordering enhancements that allow
comparisons to start more quickly.
Regular file synchronization: A regular file’s state is
its data and its attributes. Synchronizing a regular file
takes place in three steps. First, a small unit of data and
the file’s attributes are read from the reference server and
written to the SUT. If a client RPC affects the object during
this initial step, the step is repeated. This establishes
a point in time for beginning the changeset. Second, the
remaining data is copied. Third, any changeset entries
are replayed.
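The three steps can be sketched as follows. The server objects, version counter, and replay format are hypothetical stand-ins chosen to make the control flow concrete.

```python
# Sketch of three-step regular-file synchronization. Step 1 retries the
# initial copy until no client RPC intervened (modeled by a version counter),
# which fixes the point where the changeset begins.
def sync_file(ref, sut, changeset, chunk=4096):
    # Step 1: copy a small unit of data plus attributes; repeat if the file
    # changed underneath us.
    while True:
        version = ref["version"]
        sut["attrs"] = dict(ref["attrs"])
        sut["data"] = ref["data"][:chunk]
        if ref["version"] == version:
            break                        # changeset starts at this point
    # Step 2: copy the remaining data.
    sut["data"] += ref["data"][chunk:]
    # Step 3: replay buffered changeset entries (offset, data extents).
    for off, data in changeset:
        buf = bytearray(sut["data"])
        buf[off:off + len(data)] = data
        sut["data"] = bytes(buf)

ref = {"version": 1, "attrs": {"mode": 0o644}, "data": b"a" * 8192}
sut = {"attrs": {}, "data": b""}
sync_file(ref, sut, changeset=[(0, b"zz")])
```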
A file’s changeset is a list of attribute changes and
written-to extents. A bounded amount of the written data