This is the AccessMethods page for ClusterGate.RU.
Here we discuss various methods for accessing large volumes of data.
- XrootD: the Next Generation ROOT File Server.
- dCache: The goal of this project is to provide a system
for storing and retrieving huge amounts of data, distributed among a large
number of heterogeneous server nodes, under a single virtual filesystem tree
with a variety of standard access methods. Depending on the Persistency Model,
dCache provides methods for exchanging data with backend (tertiary) Storage
Systems as well as space management, pool attraction, dataset replication, hot
spot determination and recovery from disk or node failures. Connected to a
tertiary storage system, the cache simulates unlimited direct access storage
space. Data exchanges to and from the underlying HSM are performed
automatically and invisibly to the user. Filesystem namespace operations may
be performed through a standard nfs(2) interface.
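The core idea above, one virtual namespace in front of many pool nodes holding replicas, can be illustrated with a small conceptual sketch. This is not the dCache API; the class and pool names are made up for illustration.

```python
# Conceptual sketch (not the dCache API): a single virtual namespace
# mapping file paths to replicas held on several pool nodes.
import random

class PoolManager:
    def __init__(self):
        self.replicas = {}          # virtual path -> list of pool names

    def register(self, path, pool):
        """Record that `pool` holds a copy of `path`."""
        self.replicas.setdefault(path, []).append(pool)

    def select_pool(self, path):
        """Pick one pool holding the file (pool selection / attraction)."""
        pools = self.replicas.get(path)
        if not pools:
            raise FileNotFoundError(path)  # dCache would try an HSM restore here
        return random.choice(pools)

pm = PoolManager()
pm.register("/pnfs/experiment/run42.root", "pool-a")
pm.register("/pnfs/experiment/run42.root", "pool-b")
print(pm.select_pool("/pnfs/experiment/run42.root"))  # pool-a or pool-b
```

Because clients see only the virtual path, replication and pool failures stay invisible to the user, which is the property the paragraph above describes.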
- GA (Global Arrays): Global Arrays have
been designed to complement rather than substitute for the message-passing
programming model. The programmer is free to use both the shared-memory and
message-passing paradigms in the same program, and to take advantage of
existing message-passing software libraries. Global Arrays are compatible with
the Message Passing Interface (MPI).
- Global Arrays
The Global Arrays (GA) toolkit provides an efficient and portable
"shared-memory" programming interface for distributed-memory computers. Each
process in a MIMD parallel program can asynchronously access logical blocks of
physically distributed dense multi-dimensional arrays, without need for
explicit cooperation by other processes. Unlike other shared-memory
environments, the GA model exposes to the programmer the non-uniform memory
access (NUMA) characteristics of high-performance computers and
acknowledges that access to a remote portion of the shared data is slower than
to the local portion. The locality information for the shared data is
available, and direct access to the local portions of shared data is
provided. More information can be found on the Global Arrays project website.
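The GA model described above can be sketched in miniature: a logical array block-distributed over processes, where any process can read an arbitrary logical range and the locality (owner) of each element is queryable. This is a conceptual sketch in plain Python, not the real GA C/Fortran API; the class and method names are invented for illustration.

```python
# Sketch of the GA model: a 1-D "global array" block-distributed over
# nprocs processes; any process may read any logical range one-sidedly,
# and the owner of each element (the NUMA locality info) is queryable.
class GlobalArray:
    def __init__(self, n, nprocs):
        self.n = n
        self.block = (n + nprocs - 1) // nprocs   # contiguous block per process
        self.parts = [[0.0] * min(self.block, n - p * self.block)
                      for p in range(nprocs)]

    def owner(self, i):
        """Which process holds element i: the locality information GA exposes."""
        return i // self.block

    def put(self, i, value):
        self.parts[self.owner(i)][i % self.block] = value

    def get(self, lo, hi):
        """One-sided read of a logical range, gathered from the owning blocks."""
        return [self.parts[self.owner(i)][i % self.block] for i in range(lo, hi)]

ga = GlobalArray(10, nprocs=4)   # block size 3: proc 2 owns elements 6..8
ga.put(7, 3.5)
print(ga.owner(7))     # 2
print(ga.get(6, 9))    # [0.0, 3.5, 0.0]
```

In the real toolkit, a `get` touching a remote block costs a network transfer while a local block is a plain memory read, which is exactly the NUMA distinction the paragraph emphasizes.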
- NBD: Network Block Device (TCP version).
With this driver compiled into your kernel, Linux can use a remote
server as one of its block devices. Every time the client computer wants to
read /dev/nd0, it will send a request to the server via TCP, which will reply
with the data requested. This can be used for stations with low disk space (or
even diskless - if you boot from floppy) to borrow disk space from other
computers. Unlike NFS, it is possible to put any file system on it. But (also
unlike NFS), if someone has mounted NBD read/write, you must ensure that no
one else has it mounted.
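The request/reply cycle described above can be sketched with a toy TCP block server. This is not the real NBD wire protocol, just the idea: the client sends an (offset, length) read request and the server answers with the raw bytes.

```python
# Toy sketch of the NBD idea (NOT the real NBD protocol): a server exports
# a byte array as a "block device"; the client sends (offset, length) read
# requests over TCP and receives the raw data back.
import socket
import struct
import threading

DEVICE = bytearray(b"hello from the remote block device".ljust(4096, b"\0"))

def serve(srv):
    conn, _addr = srv.accept()
    with conn:
        while True:
            hdr = conn.recv(8)
            if len(hdr) < 8:          # client closed the connection
                break
            offset, length = struct.unpack("!II", hdr)
            conn.sendall(DEVICE[offset:offset + length])  # reply with the data

srv = socket.socket()
srv.bind(("127.0.0.1", 0))            # ephemeral port for the demo
srv.listen(1)
threading.Thread(target=serve, args=(srv,), daemon=True).start()

cli = socket.create_connection(srv.getsockname())
cli.sendall(struct.pack("!II", 0, 5))  # "read 5 bytes at offset 0"
data = cli.recv(5)
print(data.decode())                   # hello
cli.close()
```

The real kernel driver does the same thing one layer down: reads of `/dev/nd0` become TCP requests, which is why any filesystem can sit on top of the device.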
- The Enhanced Network Block Device Linux Kernel Module
- DRBD: a block device designed to build high-availability clusters. This is
done by mirroring a whole block device via a (dedicated) network. You can
think of it as a network RAID-1.
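The "network RAID-1" idea reduces to: every write is applied locally and shipped to a peer, so after losing the primary the peer copy is intact. A minimal sketch, with the network link stood in by a second in-memory buffer (not DRBD's actual protocol or API):

```python
# Conceptual sketch of DRBD-style mirroring: each write hits the local
# store and a peer replica, so either copy can serve reads after a failure.
class MirroredDevice:
    def __init__(self, size):
        self.local = bytearray(size)
        self.peer = bytearray(size)   # stands in for the remote replica

    def write(self, offset, data):
        # real DRBD ships `data` to the peer over a dedicated network link
        self.local[offset:offset + len(data)] = data
        self.peer[offset:offset + len(data)] = data

    def failover_read(self, offset, length):
        """After losing the primary node, the peer copy is still intact."""
        return bytes(self.peer[offset:offset + length])

dev = MirroredDevice(1024)
dev.write(100, b"critical data")
print(dev.failover_read(100, 13).decode())  # critical data
```

The design choice mirrored here is that replication happens at the block layer, below the filesystem, so any filesystem can run on top, just as with NBD above.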
- DPM: The
Disk Pool Manager (DPM) is a lightweight solution for disk storage management.
It offers the required SRM interfaces while aiming to avoid the complexity of
other access modes or of tape storage systems.
It was developed at CERN.
- SRM: the Storage Resource Management (SRM) Working Group, which defines
standards for storage resource management.
- SRB: The SDSC Storage Resource Broker (SRB) is
client-server middleware that provides a uniform interface for connecting to
heterogeneous data resources over a network and accessing replicated data
sets. SRB, in conjunction with the Metadata Catalog (MCAT), provides a way to
access data sets and resources based on their attributes and/or logical names
rather than their names or physical locations.
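The catalog idea behind SRB/MCAT, resolving a logical name or an attribute query to a physical replica, can be sketched briefly. This is an illustration only, not the real SRB or MCAT API; all names and URLs below are invented.

```python
# Sketch of the MCAT idea: data sets are registered under logical names
# with attributes; the catalog resolves names to physical replicas and
# answers attribute queries, hiding physical locations from the client.
class MetadataCatalog:
    def __init__(self):
        self.entries = {}   # logical name -> {"attrs": dict, "replicas": [urls]}

    def register(self, logical, attrs, replicas):
        self.entries[logical] = {"attrs": attrs, "replicas": replicas}

    def resolve(self, logical):
        """Map a logical name to one physical replica."""
        return self.entries[logical]["replicas"][0]

    def query(self, **attrs):
        """Find data sets by attribute values instead of by name or location."""
        return [name for name, e in self.entries.items()
                if all(e["attrs"].get(k) == v for k, v in attrs.items())]

cat = MetadataCatalog()
cat.register("run42/raw", {"experiment": "atlas", "year": 2004},
             ["srb://site-a/store/0042.dat", "srb://site-b/mirror/0042.dat"])
print(cat.resolve("run42/raw"))       # srb://site-a/store/0042.dat
print(cat.query(experiment="atlas"))  # ['run42/raw']
```

Registering a second replica URL changes nothing for the client, which is the point: access is by attributes and logical names, not physical locations.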