Ext2
ext2, or second extended file system, is a file system for the Linux kernel. It was initially designed by French software developer Rémy Card as a replacement for the extended file system (ext). Having been designed according to the same principles as the Berkeley Fast File System from BSD, it was the first commercial-grade filesystem for Linux. The canonical implementation of ext2 is the "ext2fs" filesystem driver in the Linux kernel. Other implementations (of varying quality and completeness) exist in GNU Hurd, MINIX 3, some BSD kernels, in MiNT, Haiku and as third-party Microsoft Windows and macOS drivers.
File system
In computing, a file system or filesystem (often abbreviated to fs) is a method and data structure that the operating system uses to control how data is stored and retrieved. Without a file system, data placed in a storage medium would be one large body of data with no way to tell where one piece of data stopped and the next began, or where any piece of data was located when it was time to retrieve it. By separating the data into pieces and giving each piece a name, the data are easily isolated and identified.
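As a minimal illustration of the naming idea described above, the following C sketch stores two independent pieces of data under two names and then retrieves one of them by name without touching the other; the file names are purely illustrative.

```c
/* Minimal sketch: giving each piece of data a name lets it be stored
 * and retrieved independently of other data. File names are invented
 * for illustration. */
#include <stdio.h>

int main(void)
{
    /* Store two separate pieces of data under two names. */
    FILE *a = fopen("notes.txt", "w");
    FILE *b = fopen("todo.txt", "w");
    if (!a || !b) { perror("fopen"); return 1; }
    fputs("meeting at noon\n", a);
    fputs("buy coffee\n", b);
    fclose(a);
    fclose(b);

    /* Retrieve one piece by name, leaving the other untouched. */
    char line[128];
    FILE *r = fopen("todo.txt", "r");
    if (!r) { perror("fopen"); return 1; }
    if (fgets(line, sizeof line, r))
        printf("todo.txt contains: %s", line);
    fclose(r);
    return 0;
}
```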
Inode
The inode (index node) is a data structure in a Unix-style file system that describes a file-system object such as a file or a directory. Each inode stores the attributes and disk block locations of the object's data. File-system object attributes may include metadata (times of last change, access, modification), as well as owner and permission data. A directory is a list of inodes with their assigned names. The list includes an entry for itself, its parent, and each of its children. There has been uncertainty on the Linux kernel mailing list about the reason for the "i" in "inode".
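The sketch below uses the POSIX stat(2) call to read a few of these inode attributes for a file; the path "example.txt" is an arbitrary placeholder.

```c
/* Minimal sketch: inspect a file's inode metadata with stat(2). */
#include <stdio.h>
#include <stdint.h>
#include <sys/stat.h>
#include <time.h>

int main(void)
{
    struct stat sb;

    if (stat("example.txt", &sb) == -1) {   /* illustrative path */
        perror("stat");
        return 1;
    }

    printf("inode number:      %ju\n", (uintmax_t)sb.st_ino);
    printf("owner uid/gid:     %u/%u\n",
           (unsigned)sb.st_uid, (unsigned)sb.st_gid);
    printf("permission bits:   %o\n", (unsigned)(sb.st_mode & 07777));
    printf("last modification: %s", ctime(&sb.st_mtime));
    return 0;
}
```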
Ext3
ext3, or third extended filesystem, is a journaled file system that is commonly used by the Linux kernel. It used to be the default file system for many popular Linux distributions. Stephen Tweedie first revealed that he was working on extending ext2 in Journaling the Linux ext2fs Filesystem in a 1998 paper, and later in a February 1999 kernel mailing list posting. The filesystem was merged with the mainline Linux kernel in November 2001, from 2.4.15 onward. Its main advantage over ext2 is journaling, which improves reliability and eliminates the need to check the file system after an unclean shutdown.
Block (data storage)
In computing (specifically data transmission and data storage), a block, sometimes called a physical record, is a sequence of bytes or bits, usually containing some whole number of records, having a maximum length: the block size. Data thus structured are said to be blocked. The process of putting data into blocks is called blocking, while deblocking is the process of extracting data from blocks. Blocked data is normally stored in a data buffer, and read or written a whole block at a time.
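As a small illustration, the following C sketch reads a file one whole block at a time into a data buffer; the 4096-byte block size and the file name are assumptions chosen for the example, not properties of any particular device or file system.

```c
/* Minimal sketch: read a file one fixed-size block at a time. */
#include <stdio.h>

#define BLOCK_SIZE 4096   /* assumed block size for the example */

int main(void)
{
    FILE *fp = fopen("example.dat", "rb");   /* illustrative file name */
    if (!fp) {
        perror("fopen");
        return 1;
    }

    char buffer[BLOCK_SIZE];   /* the data buffer */
    size_t n;
    long total = 0;

    /* Each fread call transfers up to one whole block. */
    while ((n = fread(buffer, 1, BLOCK_SIZE, fp)) > 0)
        total += (long)n;

    printf("read %ld bytes in %d-byte blocks\n", total, BLOCK_SIZE);
    fclose(fp);
    return 0;
}
```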
Btrfs
Btrfs (pronounced as "better F S", "butter F S", "b-tree F S", or simply by spelling it out) is a computer storage format that combines a file system based on the copy-on-write (COW) principle with a logical volume manager (not to be confused with Linux's LVM), developed together. Its development was begun by Chris Mason in 2007 for use in Linux, and since November 2013 the file system's on-disk format has been declared stable in the Linux kernel. Btrfs is intended to address the lack of pooling, snapshots, checksums, and integral multi-device spanning in Linux file systems.
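The toy C sketch below illustrates only the general copy-on-write idea: an update writes a modified copy of a block instead of overwriting it in place, so the old version remains reachable, which is what makes cheap snapshots possible. It is not Btrfs's actual on-disk logic, and all names are invented for illustration.

```c
/* Toy sketch of the copy-on-write principle, not Btrfs's real format. */
#include <stdio.h>
#include <stdlib.h>

struct block {
    char data[32];
};

/* Return a new block holding the updated contents; the old block is
 * left untouched until the caller drops its reference. */
static struct block *cow_update(const struct block *old, const char *newdata)
{
    struct block *copy = malloc(sizeof *copy);
    if (!copy)
        return NULL;
    *copy = *old;                                            /* copy the old block... */
    snprintf(copy->data, sizeof copy->data, "%s", newdata);  /* ...then modify the copy */
    return copy;
}

int main(void)
{
    struct block *live = malloc(sizeof *live);
    if (!live) return 1;
    snprintf(live->data, sizeof live->data, "version 1");

    struct block *updated = cow_update(live, "version 2");
    if (!updated) { free(live); return 1; }

    /* The old version still exists; a snapshot would simply keep
     * pointing at it. */
    printf("old block: %s\n", live->data);
    printf("new block: %s\n", updated->data);

    free(live);
    free(updated);
    return 0;
}
```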
Virtual file system
A virtual file system (VFS) or virtual filesystem switch is an abstraction layer on top of a more concrete file system. The purpose of a VFS is to allow client applications to access different types of concrete file systems in a uniform way. A VFS can, for example, be used to access local and network storage devices transparently, without the client application noticing the difference. It can be used to bridge the differences between Windows, classic Mac OS/macOS and Unix filesystems, so that applications can access files on local file systems of those types without having to know what type of file system they are accessing.
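The following toy C sketch illustrates the dispatch idea behind a VFS: client code calls one generic interface, and a table of function pointers routes the call to a concrete backend. The structures and backends here are invented for illustration and are not the Linux kernel's actual VFS types.

```c
/* Toy sketch of VFS-style dispatch through an operations table. */
#include <stdio.h>

struct vfs_ops {
    const char *name;
    int (*read)(const char *path, char *buf, int len);
};

/* Hypothetical backend: pretends to read from a local disk. */
static int disk_read(const char *path, char *buf, int len)
{
    return snprintf(buf, len, "<data for %s from local disk>", path);
}

/* Hypothetical backend: pretends to read over the network. */
static int net_read(const char *path, char *buf, int len)
{
    return snprintf(buf, len, "<data for %s from network share>", path);
}

static const struct vfs_ops disk_fs = { "diskfs", disk_read };
static const struct vfs_ops net_fs  = { "netfs",  net_read  };

/* The client calls the same function regardless of the backend. */
static void read_file(const struct vfs_ops *fs, const char *path)
{
    char buf[128];
    fs->read(path, buf, sizeof buf);
    printf("[%s] %s\n", fs->name, buf);
}

int main(void)
{
    read_file(&disk_fs, "/etc/hostname");
    read_file(&net_fs,  "/share/report.txt");
    return 0;
}
```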
Pipeline (Unix)
In Unix-like computer operating systems, a pipeline is a mechanism for inter-process communication using message passing. A pipeline is a set of processes chained together by their standard streams, so that the output text of each process (stdout) is passed directly as input (stdin) to the next one. The second process is started while the first process is still executing, and they run concurrently. The concept of pipelines was championed by Douglas McIlroy at Unix's ancestral home of Bell Labs during the development of Unix, shaping its toolbox philosophy.
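The sketch below builds the equivalent of the shell pipeline "ls | wc -l" by hand with pipe(2), fork(2), and dup2(2), so the first process's stdout feeds the second process's stdin and both run concurrently; error handling is kept minimal for brevity.

```c
/* Minimal sketch: connect "ls" to "wc -l" through a pipe. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                 /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {         /* first child: "ls" */
        dup2(fd[1], STDOUT_FILENO);   /* stdout -> pipe write end */
        close(fd[0]);
        close(fd[1]);
        execlp("ls", "ls", (char *)NULL);
        perror("execlp"); _exit(1);
    }

    if (fork() == 0) {         /* second child: "wc -l" */
        dup2(fd[0], STDIN_FILENO);    /* stdin <- pipe read end */
        close(fd[0]);
        close(fd[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        perror("execlp"); _exit(1);
    }

    close(fd[0]);
    close(fd[1]);
    wait(NULL);
    wait(NULL);
    return 0;
}
```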