Outstanding Performance with High Scalability
FEFS delivers scalable I/O performance (up to ~1 TB/s) and capacity (up to ~8 EB) across multiple servers and storage devices. Its effective I/O usage management provides QoS by making full use of the available I/O bandwidth.
FEFS avoids out-of-service time caused by a single point of failure through redundant hardware and a failover mechanism. With InfiniBand multi-rail, each request is distributed across all IB connections in round-robin order.
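Because FEFS is based on Lustre, aggregate bandwidth on large files is typically obtained by striping them across multiple storage targets (OSTs). A minimal sketch using the standard Lustre `lfs` tool; the mount point `/mnt/fefs` and the stripe parameters are illustrative, not FEFS defaults:

```shell
# Stripe new files in this directory across 8 OSTs with a 4 MiB stripe
# size, so large sequential I/O aggregates the bandwidth of 8 targets.
lfs setstripe -c 8 -S 4M /mnt/fefs/scratch/bigdata

# Verify the resulting layout.
lfs getstripe /mnt/fefs/scratch/bigdata
```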
Advanced Collaboration Environment
FEFS supports assigning service threads to different node groups, so multiple teams can share the same file system with a guaranteed level of service. It also supports per-directory usage quotas, per-node fair share, and more.
Fujitsu Hardware Ready
FEFS is guaranteed to work with a wide range of Fujitsu hardware, from direct-attached subsystems to Fibre Channel storage systems. Its client driver can also be installed on third-party clients.
FEFS is based on the community release of Lustre software, and is hardware, server, and network fabric neutral. Enterprises can scale their storage deployments horizontally, yet continue to have simple-to-manage storage.
FEFS supports per-directory quotas: multiple users can put files into the same project directory, with usage counted against that directory's limit.
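Upstream Lustre's analogous mechanism is the project quota, where a directory tree is tagged with a project ID and limits are set on that ID. A sketch with standard Lustre commands; the project ID, limits, and paths are illustrative, and FEFS's own per-directory quota interface may differ:

```shell
# Tag the shared directory with project ID 1000; -s makes the ID
# inherited by files and subdirectories created under it.
lfs project -p 1000 -s /mnt/fefs/projects/teamA

# Cap the project's total block usage, regardless of which user
# owns the individual files.
lfs setquota -p 1000 --block-softlimit 900G --block-hardlimit 1T /mnt/fefs

# Report current usage against the limit.
lfs quota -p 1000 /mnt/fefs
```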
Flexible Storage Schemes
Users can define which drives their files are placed on; isolated projects can run on separate drives for guaranteed performance.
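In Lustre terms, this kind of isolation is usually achieved with OST pools: a named group of OSTs that a directory's files are restricted to. A sketch under the assumption that FEFS exposes standard Lustre pool management; the file system name `fefs`, pool name, and OST indices are illustrative:

```shell
# On the MGS node: create a pool and add a subset of OSTs to it.
lctl pool_new fefs.projA
lctl pool_add fefs.projA fefs-OST000[0-3]

# On a client: direct this project's files to that pool only,
# so other projects' I/O lands on different drives.
lfs setstripe --pool projA /mnt/fefs/projects/projA
```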
FEFS supports per-node fair-share configuration. A busy interactive node (e.g., a login node) can be configured so that no single user obstructs others' progress.
Single File System Namespace
FEFS is capable of providing a single namespace on top of hundreds or thousands of logical volumes; zero-downtime expansion is possible.
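In upstream Lustre, zero-downtime expansion is done by formatting a new OST against the existing management server and mounting it, which adds its capacity to the live namespace. A sketch of that standard procedure; the index, MGS node name, and device path are illustrative, and FEFS deployments may wrap this in their own tooling:

```shell
# Format a new OST volume, pointing it at the existing MGS.
mkfs.lustre --fsname=fefs --ost --index=42 \
            --mgsnode=mgs@o2ib /dev/mapper/new_ost_lun

# Mounting the target brings it into service; clients see the
# additional capacity without unmounting the file system.
mount -t lustre /dev/mapper/new_ost_lun /lustre/fefs/ost42
```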
With multipath storage devices, servers can be configured as high-availability active/active pairs. Failure downtime is minimised.
Higher File System Limits
FEFS supports a larger number of ACL entries per file than the upstream Lustre implementation.
* For detailed information, please refer to https://www.fujitsu.com/downloads/TC/sc11/fefs-presentation-sc11.pdf.