To define VMFS, we first need to understand that Virtual Machine File System (VMFS) is a scalable, symmetrically clustered, high-performance file system for hosting virtual machine (VM) files on shared block storage.
It implements a clustered locking protocol using only storage connections and does not require network communication between the hosts participating in the VMFS cluster. The VMFS file system layout and IO algorithms are optimized to deliver the raw IO bandwidth of the device to VMs.
Adaptive IO mechanisms mask transient errors in the physical storage fabric. The VMFS lock service forms the basis of VMware clustered features such as vMotion, Storage vMotion, Distributed Resource Scheduler, High Availability, and Fault Tolerance.
Virtual machine metadata is serialized into files, and VMFS provides a POSIX-like interface for cluster-safe virtual machine management operations. It also includes a pipelined data mover for initializing and migrating large disks.
VMFS has inspired changes to disk array firmware and the SCSI protocol. These changes allow the file system to implement hardware-accelerated data movement and locking (for example, the SCSI ATS primitive), among other things.
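The on-disk lock manager mentioned above relies on an atomic test-and-set primitive (standardized in SCSI as ATS, the COMPARE AND WRITE command): a host acquires a lock only if the lock region still holds the value it expects. Here is a simplified conceptual sketch in Python, with hypothetical names, not VMware's actual implementation:

```python
import threading

class SharedLockRegion:
    """Toy model of an on-disk lock word shared by all hosts."""
    FREE = 0

    def __init__(self):
        self.owner = self.FREE
        self._bus = threading.Lock()  # stands in for the array's atomicity guarantee

    def compare_and_write(self, expected, new):
        """Atomic test-and-set, in the spirit of SCSI COMPARE AND WRITE (ATS)."""
        with self._bus:
            if self.owner == expected:
                self.owner = new     # lock state updated atomically
                return True
            return False             # another host got there first

region = SharedLockRegion()
host_a, host_b = 1, 2
print(region.compare_and_write(SharedLockRegion.FREE, host_a))  # True: A acquires
print(region.compare_and_write(SharedLockRegion.FREE, host_b))  # False: held by A
region.compare_and_write(host_a, SharedLockRegion.FREE)         # A releases
print(region.compare_and_write(SharedLockRegion.FREE, host_b))  # True: B acquires
```

Because the test and the write happen as one atomic operation on the shared device, hosts can coordinate without any network traffic between them, which is the key property of the VMFS locking protocol described above.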
Types of VMFS
Many file systems exist, including disk file systems, database file systems, and network file systems. Examples of file systems you may have heard of are:
- NTFS (supported by Windows), ext (e.g., ext2, ext3, and ext4), and ReiserFS (both supported by Linux)
- HFS+ and APFS (both supported by Apple’s macOS)
- UDF and ISO 9660 for optical discs
On a hard disk, VMFS is set up when the disk is initialized (if new) or formatted (whether new or not).
VMware VMFS has grown significantly since the release of the first version. Here is a brief overview of VMFS versions to track significant changes and features.
VMFS 1 is used for ESX Server 1.x. This version of VMware VMFS did not support clustering functions and was used on only one server at a time; simultaneous access from multiple servers was not supported.
VMFS 2 is used on ESX Server 2.x and sometimes on ESX 3.x. VMFS 2 is a flat file system with no directory structure.
VMFS 3 is used on ESX Server 3.x and ESX/ESXi Server 4.x in vSphere. This version adds support for a directory structure. The maximum volume size is 64 TB (spanning up to 32 extents), and the maximum size of a single extent (LUN) is 2 TB. ESXi 7.0 does not support VMFS 3.
VMFS 5 is used starting with VMware vSphere 5.x. The maximum volume (file system) size was increased to 64 TB, and starting with ESXi 5.5 the maximum VMDK file size was increased to 62 TB (earlier 5.x releases were limited to 2 TB VMDK virtual disks). GPT partition layout support was added; both GPT and MBR are supported (previous versions of VMFS supported only MBR).
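The jump past 2 TB is tied to the move from MBR to GPT: MBR stores a partition's size as a 32-bit sector count, which with classic 512-byte sectors caps a partition at 2 TiB. A quick check of that arithmetic:

```python
SECTOR_SIZE = 512          # bytes per logical sector on classic disks
MBR_MAX_SECTORS = 2**32    # MBR uses 32-bit LBA/size fields

mbr_limit_bytes = MBR_MAX_SECTORS * SECTOR_SIZE
print(mbr_limit_bytes // 2**40)  # partition size cap in TiB -> 2
```

GPT uses 64-bit LBA fields, so it removes this cap entirely, which is what lets VMFS 5 address 64 TB volumes on a single device.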
VMFS 6 was released in vSphere 6.5 and is used in vSphere 6.7, vSphere 7.0, and newer versions such as vSphere 7.0 Update 3.
Virtualized storage gives virtual machines (VMs) the space they need to host their operating systems and applications. In virtualized environments, the virtual disks of a virtual machine are stored in a “VMFS datastore”.
A shared storage device can be presented to multiple ESXi hosts, which store their virtual machine files on the VMFS datastores it backs. Starting with VMFS 6, a datastore also supports automatic space reclamation, returning blocks freed by deleted virtual disk files to the storage array.
In the virtualization process, a physical disk is partitioned to form a unit of space known as a “Logical Unit Number” or LUN; one or more LUNs make up a “volume”, and one or more volumes make up a datastore. The datastore must be formatted with a file system before it can be used.
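That LUN → volume → datastore hierarchy can be modeled in a few lines of Python; this is a conceptual sketch with made-up sizes, not a vSphere API:

```python
from dataclasses import dataclass

@dataclass
class LUN:                     # a logical unit carved from a physical disk
    size_gb: int

@dataclass
class Volume:                  # one or more LUNs (extents) form a volume
    luns: list

    @property
    def size_gb(self):
        return sum(l.size_gb for l in self.luns)

@dataclass
class Datastore:               # formatted with a file system, e.g. VMFS 6
    filesystem: str
    volumes: list

    @property
    def capacity_gb(self):
        return sum(v.size_gb for v in self.volumes)

# A datastore backed by one volume spanning two 2 TB LUNs:
ds = Datastore("VMFS6", [Volume([LUN(2048), LUN(2048)])])
print(ds.capacity_gb)  # 4096
```

Spanning a volume across several LUNs in this way is what vSphere calls datastore extents, one of the VMFS features listed below.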
With VMware’s cloud computing platform, vSphere, datastores are configured with one of two file system formats:
- Virtual Machine File System (VMFS)
- Network File System (NFS)
VMware vSphere also supports the creation of shared NFS datastores with thin provisioning. Unlike VMFS, an NFS datastore is not formatted by the ESXi host; the host mounts a share exported by a NAS server, and the NFS server controls the underlying file system and disk space.
The NFS datastore and the backing NFS storage work together to provide the disk space the virtual machines consume.
Here are the top features of the virtual machine file system (VMFS):
- VMware VMFS block size
- File fragmentation
- Datastore extents
- Journal logging
- Directory structure
- Thin provisioning
- Free space reclamation
- Snapshots and sparse virtual disks
- Raw Device Mapping
Raw Device Mapping
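Two of the features above, thin provisioning and free space reclamation, can be illustrated with a toy block-allocation model; this is a conceptual Python sketch, not VMFS's actual on-disk format:

```python
class ThinDisk:
    """Toy thin-provisioned disk: blocks are backed only on first write
    and can be returned to the storage pool with unmap()."""

    def __init__(self, provisioned_blocks):
        self.provisioned = provisioned_blocks  # size promised to the guest
        self.allocated = set()                 # blocks actually consuming storage

    def write(self, block):
        if block >= self.provisioned:
            raise ValueError("write past end of disk")
        self.allocated.add(block)              # allocate on first write

    def unmap(self, block):
        self.allocated.discard(block)          # reclaim freed space

disk = ThinDisk(provisioned_blocks=1000)
disk.write(0); disk.write(1); disk.write(2)
print(len(disk.allocated))   # 3 blocks consumed, not the full 1000
disk.unmap(1)
print(len(disk.allocated))   # 2 after reclamation
```

The point of the model: the guest sees the full provisioned size, but physical space is consumed only as data is written, and reclamation hands freed blocks back so other disks on the datastore can use them.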
Integrating Raw Device Mapping (RDM) disks into the VMware VMFS structure gives you more flexibility when working with VM storage. There are two RDM compatibility modes in VMware vSphere.
RDM disks in virtual compatibility mode
A VMDK mapping file (*-rdm.vmdk) is created on a VMFS datastore to map the physical LUN into the virtual machine. In this mode the mapped LUN behaves like a regular virtual disk, so VMFS features such as snapshots remain available.
RDM disks in physical compatibility mode
The ESXi host still creates a mapping file on a VMFS datastore, but SCSI commands are passed directly to the LUN, bypassing the hypervisor’s virtualization layer (except for the REPORT LUNs command). This is the least virtualized type of disk; VM snapshots are not supported in this mode.
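The difference between the two modes can be sketched as a command filter: virtual compatibility mode virtualizes every guest SCSI command, while physical compatibility mode passes everything through except REPORT LUNs. A conceptual Python model (not ESXi code):

```python
def handle_scsi(command, mode):
    """Toy model of how an RDM routes a guest's SCSI commands.
    mode is 'virtual' or 'physical' compatibility."""
    if mode == "virtual":
        return "virtualized"           # hypervisor intercepts every command
    if mode == "physical":
        if command == "REPORT_LUNS":   # the one command ESXi still handles
            return "virtualized"
        return "passthrough"           # sent directly to the LUN
    raise ValueError(f"unknown mode: {mode}")

print(handle_scsi("READ_16", "virtual"))       # virtualized
print(handle_scsi("READ_16", "physical"))      # passthrough
print(handle_scsi("REPORT_LUNS", "physical"))  # virtualized
```

Passing commands straight through is what lets physical-mode RDMs support array-side tools (and is also why hypervisor features such as snapshots cannot apply to them).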
With VMFS, new virtual machines can be provisioned without involving a storage administrator, and the size of a volume can be changed without disrupting running operations.
Multiple VMware ESX Server installations can simultaneously read and write data to and from a single storage location, and ESX servers can be added to or removed from the VMFS volume without affecting other hosts.
The file and block size can be adjusted to optimize each virtual machine’s I/O (input/output) functionality. In case of a server failure, the distributed file system enables rapid system recovery and prevents catastrophic data loss.
VMware VMFS (Virtual Machine File System) is a cluster file system that facilitates the virtualization of storage for multiple VMware ESX server installations.
The ESX hypervisor divides physical servers into multiple virtual machines. VMFS was originally part of a virtualization package called VMware Infrastructure 3.