5. File Systems and Disk Management
Journaling file systems reduce the time needed to recover a file system that was not unmounted properly. While this can be extremely important in reducing downtime for servers, it has also become popular for desktop environments. This chapter discusses other journaling file systems you can use instead of the default LFS extended file system (ext2/3/4). It also provides introductory material on managing disk arrays.
5.1 About initramfs
The only purpose of an initramfs is to mount the root filesystem. The initramfs is a complete set of directories that you would find on a normal root filesystem. It is bundled into a single cpio archive and compressed with one of several compression algorithms.
At boot time, the boot loader loads the kernel and the initramfs image into memory and starts the kernel. The kernel checks for the presence of the initramfs and, if found, mounts it as / and runs /init. The init program is typically a shell script. Note that the boot process takes longer, possibly significantly longer, if an initramfs is used.
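The archive format described above can be illustrated in miniature. The following sketch builds a tiny initramfs-style directory tree, packs it with the same cpio "newc" format and gzip compression used later in this section, and lists the result; the paths and the /tmp output file are illustrative only, and the image is not bootable:

```shell
# Build a miniature initramfs-style archive and list its contents.
# This only illustrates the cpio "newc" format; it is not a bootable image.
WDIR=$(mktemp -d)
mkdir -p $WDIR/{dev,proc,sys,usr/bin}
echo '#!/bin/sh' > $WDIR/init          # stand-in for the real /init script
chmod 755 $WDIR/init
( cd $WDIR ; find . | cpio -o -H newc --quiet | gzip -9 ) > /tmp/mini-initrd.img
zcat /tmp/mini-initrd.img | cpio -it --quiet   # list the archived paths
rm -rf $WDIR
```

The kernel unpacks exactly such an archive into a RAM-backed root at boot and then looks for /init.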
For most distributions, kernel modules are the biggest reason to have an initramfs. In a general distribution, there are many unknowns such as file system types and disk layouts. In a way, this is the opposite of LFS where the system capabilities and layout are known and a custom kernel is normally built. In this situation, an initramfs is rarely needed.
There are only four primary reasons to have an initramfs in the LFS environment: loading the rootfs from a network, loading it from an LVM logical volume, having an encrypted rootfs where a password is required, or for the convenience of specifying the rootfs as a LABEL or UUID. Anything else usually means that the kernel was not configured properly.
Building an initramfs
If you do decide to build an initramfs, the following scripts will provide a basis to do it. The scripts allow specifying a rootfs via partition UUID or partition LABEL, or a rootfs on an LVM logical volume. They do not support an encrypted root file system or mounting the rootfs over a network. For more complete capabilities, see the LFS Hints or dracut.
To install these scripts, run the following commands as the root user:
cat > /usr/sbin/mkinitramfs << "EOF"
#!/bin/bash
# This file is based in part on the mkinitramfs script for the LFS LiveCD
# written by Alexander E. Patrakov and Jeremy Huntwork.
copy()
{
local file
if [ "$2" = "lib" ]; then
file=$(PATH=/usr/lib type -p $1)
else
file=$(type -p $1)
fi
if [ -n "$file" ] ; then
cp $file $WDIR/usr/$2
else
echo "Missing required file: $1 for directory $2"
rm -rf $WDIR
exit 1
fi
}
if [ -z "$1" ] ; then
INITRAMFS_FILE=initrd.img-no-kmods
else
KERNEL_VERSION=$1
INITRAMFS_FILE=initrd.img-$KERNEL_VERSION
fi
if [ -n "$KERNEL_VERSION" ] && [ ! -d "/usr/lib/modules/$1" ] ; then
echo "No modules directory named $1"
exit 1
fi
printf "Creating $INITRAMFS_FILE... "
binfiles="sh cat cp dd killall ls mkdir mknod mount "
binfiles="$binfiles umount sed sleep ln rm uname"
binfiles="$binfiles readlink basename"
# Systemd installs udevadm in /bin. Other udev implementations have it in /sbin
if [ -x /usr/bin/udevadm ] ; then binfiles="$binfiles udevadm"; fi
sbinfiles="modprobe blkid switch_root"
# Optional files and locations
for f in mdadm mdmon udevd udevadm; do
if [ -x /usr/sbin/$f ] ; then sbinfiles="$sbinfiles $f"; fi
done
# Add lvm if present (cannot be done with the others because it
# also needs dmsetup)
if [ -x /usr/sbin/lvm ] ; then sbinfiles="$sbinfiles lvm dmsetup"; fi
unsorted=$(mktemp /tmp/unsorted.XXXXXXXXXX)
DATADIR=/usr/share/mkinitramfs
INITIN=init.in
# Create a temporary working directory
WDIR=$(mktemp -d /tmp/initrd-work.XXXXXXXXXX)
# Create base directory structure
mkdir -p $WDIR/{dev,run,sys,proc,usr/{bin,lib/{firmware,modules},sbin}}
mkdir -p $WDIR/etc/{modprobe.d,udev/rules.d}
touch $WDIR/etc/modprobe.d/modprobe.conf
ln -s usr/bin $WDIR/bin
ln -s usr/lib $WDIR/lib
ln -s usr/sbin $WDIR/sbin
ln -s lib $WDIR/lib64
# Create necessary device nodes
mknod -m 640 $WDIR/dev/console c 5 1
mknod -m 664 $WDIR/dev/null c 1 3
# Install the udev configuration files
if [ -f /etc/udev/udev.conf ]; then
cp /etc/udev/udev.conf $WDIR/etc/udev/udev.conf
fi
for file in $(find /etc/udev/rules.d/ -type f) ; do
cp $file $WDIR/etc/udev/rules.d
done
# Install any firmware present
cp -a /usr/lib/firmware $WDIR/usr/lib
# Copy the RAID configuration file if present
if [ -f /etc/mdadm.conf ] ; then
cp /etc/mdadm.conf $WDIR/etc
fi
# Install the init file
install -m0755 $DATADIR/$INITIN $WDIR/init
if [ -n "$KERNEL_VERSION" ] ; then
if [ -x /usr/bin/kmod ] ; then
binfiles="$binfiles kmod"
else
binfiles="$binfiles lsmod"
sbinfiles="$sbinfiles insmod"
fi
fi
# Install basic binaries
for f in $binfiles ; do
ldd /usr/bin/$f | sed "s/\t//" | cut -d " " -f1 >> $unsorted
copy /usr/bin/$f bin
done
for f in $sbinfiles ; do
ldd /usr/sbin/$f | sed "s/\t//" | cut -d " " -f1 >> $unsorted
copy $f sbin
done
# Add udevd libraries if not in /usr/sbin
if [ -x /usr/lib/udev/udevd ] ; then
ldd /usr/lib/udev/udevd | sed "s/\t//" | cut -d " " -f1 >> $unsorted
elif [ -x /usr/lib/systemd/systemd-udevd ] ; then
ldd /usr/lib/systemd/systemd-udevd | sed "s/\t//" | cut -d " " -f1 >> $unsorted
fi
# Add module symlinks if appropriate
if [ -n "$KERNEL_VERSION" ] && [ -x /usr/bin/kmod ] ; then
ln -s kmod $WDIR/usr/bin/lsmod
ln -s kmod $WDIR/usr/bin/insmod
fi
# Add lvm symlinks if appropriate
# Also copy the lvm.conf file
if [ -x /usr/sbin/lvm ] ; then
ln -s lvm $WDIR/usr/sbin/lvchange
ln -s lvm $WDIR/usr/sbin/lvrename
ln -s lvm $WDIR/usr/sbin/lvextend
ln -s lvm $WDIR/usr/sbin/lvcreate
ln -s lvm $WDIR/usr/sbin/lvdisplay
ln -s lvm $WDIR/usr/sbin/lvscan
ln -s lvm $WDIR/usr/sbin/pvchange
ln -s lvm $WDIR/usr/sbin/pvck
ln -s lvm $WDIR/usr/sbin/pvcreate
ln -s lvm $WDIR/usr/sbin/pvdisplay
ln -s lvm $WDIR/usr/sbin/pvscan
ln -s lvm $WDIR/usr/sbin/vgchange
ln -s lvm $WDIR/usr/sbin/vgcreate
ln -s lvm $WDIR/usr/sbin/vgscan
ln -s lvm $WDIR/usr/sbin/vgrename
ln -s lvm $WDIR/usr/sbin/vgck
# Conf file(s)
cp -a /etc/lvm $WDIR/etc
fi
# Install libraries
sort $unsorted | uniq | while read library ; do
# linux-vdso and linux-gate are pseudo libraries and do not correspond to a file
# libsystemd-shared is in /lib/systemd, so it is not found by copy, and
# it is copied below anyway
if [[ "$library" == linux-vdso.so.1 ]] ||
[[ "$library" == linux-gate.so.1 ]] ||
[[ "$library" == libsystemd-shared* ]]; then
continue
fi
copy $library lib
done
if [ -d /usr/lib/udev ]; then
cp -a /usr/lib/udev $WDIR/usr/lib
fi
if [ -d /usr/lib/systemd ]; then
cp -a /usr/lib/systemd $WDIR/usr/lib
fi
if [ -d /usr/lib/elogind ]; then
cp -a /usr/lib/elogind $WDIR/usr/lib
fi
# Install the kernel modules if requested
if [ -n "$KERNEL_VERSION" ]; then
find \
/usr/lib/modules/$KERNEL_VERSION/kernel/{crypto,fs,lib} \
/usr/lib/modules/$KERNEL_VERSION/kernel/drivers/{block,ata,nvme,md,firewire} \
/usr/lib/modules/$KERNEL_VERSION/kernel/drivers/{scsi,message,pcmcia,virtio} \
/usr/lib/modules/$KERNEL_VERSION/kernel/drivers/usb/{host,storage} \
-type f 2> /dev/null | cpio --make-directories -p --quiet $WDIR
cp /usr/lib/modules/$KERNEL_VERSION/modules.{builtin,order} \
$WDIR/usr/lib/modules/$KERNEL_VERSION
if [ -f /usr/lib/modules/$KERNEL_VERSION/modules.builtin.modinfo ]; then
cp /usr/lib/modules/$KERNEL_VERSION/modules.builtin.modinfo \
$WDIR/usr/lib/modules/$KERNEL_VERSION
fi
depmod -b $WDIR $KERNEL_VERSION
fi
( cd $WDIR ; find . | cpio -o -H newc --quiet | gzip -9 ) > $INITRAMFS_FILE
# Prepare early loading of microcode if available
if ls /usr/lib/firmware/intel-ucode/* >/dev/null 2>&1 ||
ls /usr/lib/firmware/amd-ucode/* >/dev/null 2>&1; then
# first empty WDIR to reuse it
rm -r $WDIR/*
DSTDIR=$WDIR/kernel/x86/microcode
mkdir -p $DSTDIR
if [ -d /usr/lib/firmware/amd-ucode ]; then
cat /usr/lib/firmware/amd-ucode/microcode_amd*.bin > $DSTDIR/AuthenticAMD.bin
fi
if [ -d /usr/lib/firmware/intel-ucode ]; then
cat /usr/lib/firmware/intel-ucode/* > $DSTDIR/GenuineIntel.bin
fi
( cd $WDIR; find . | cpio -o -H newc --quiet ) > microcode.img
cat microcode.img $INITRAMFS_FILE > tmpfile
mv tmpfile $INITRAMFS_FILE
rm microcode.img
fi
# Remove the temporary directories and files
rm -rf $WDIR $unsorted
printf "done.\n"
EOF
chmod 0755 /usr/sbin/mkinitramfs
mkdir -p /usr/share/mkinitramfs &&
cat > /usr/share/mkinitramfs/init.in << "EOF"
#!/bin/sh
PATH=/usr/bin:/usr/sbin
export PATH
problem()
{
printf "Encountered a problem!\n\nDropping you to a shell.\n\n"
sh
}
no_device()
{
printf "The device %s, which is supposed to contain the\n" $1
printf "root file system, does not exist.\n"
printf "Please fix this problem and exit this shell.\n\n"
}
no_mount()
{
printf "Could not mount device %s\n" $1
printf "Sleeping forever. Please reboot and fix the kernel command line.\n\n"
printf "Maybe the device is formatted with an unsupported file system?\n\n"
printf "Or maybe filesystem type autodetection went wrong, in which case\n"
printf "you should add the rootfstype=... parameter to the kernel command line.\n\n"
printf "Available partitions:\n"
}
do_mount_root()
{
mkdir /.root
[ -n "$rootflags" ] && rootflags="$rootflags,"
rootflags="$rootflags$ro"
case "$root" in
/dev/* ) device=$root ;;
UUID=* ) eval $root; device="/dev/disk/by-uuid/$UUID" ;;
PARTUUID=*) eval $root; device="/dev/disk/by-partuuid/$PARTUUID" ;;
LABEL=* ) eval $root; device="/dev/disk/by-label/$LABEL" ;;
"" ) echo "No root device specified." ; problem ;;
esac
while [ ! -b "$device" ] ; do
no_device $device
problem
done
if ! mount -n -t "$rootfstype" -o "$rootflags" "$device" /.root ; then
no_mount $device
cat /proc/partitions
while true ; do sleep 10000 ; done
else
echo "Successfully mounted device $root"
fi
}
do_try_resume()
{
case "$resume" in
UUID=* ) eval $resume; resume="/dev/disk/by-uuid/$UUID" ;;
LABEL=*) eval $resume; resume="/dev/disk/by-label/$LABEL" ;;
esac
if $noresume || ! [ -b "$resume" ]; then return; fi
ls -lH "$resume" | ( read x x x x maj min x
echo -n ${maj%,}:$min > /sys/power/resume )
}
init=/sbin/init
root=
rootdelay=
rootfstype=auto
ro="ro"
rootflags=
device=
resume=
noresume=false
mount -n -t devtmpfs devtmpfs /dev
mount -n -t proc proc /proc
mount -n -t sysfs sysfs /sys
mount -n -t tmpfs tmpfs /run
read -r cmdline < /proc/cmdline
for param in $cmdline ; do
case $param in
init=* ) init=${param#init=} ;;
root=* ) root=${param#root=} ;;
rootdelay=* ) rootdelay=${param#rootdelay=} ;;
rootfstype=*) rootfstype=${param#rootfstype=} ;;
rootflags=* ) rootflags=${param#rootflags=} ;;
resume=* ) resume=${param#resume=} ;;
noresume ) noresume=true ;;
ro ) ro="ro" ;;
rw ) ro="rw" ;;
esac
done
# udevd location depends on version
if [ -x /sbin/udevd ]; then
UDEVD=/sbin/udevd
elif [ -x /lib/udev/udevd ]; then
UDEVD=/lib/udev/udevd
elif [ -x /lib/systemd/systemd-udevd ]; then
UDEVD=/lib/systemd/systemd-udevd
else
echo "Cannot find udevd nor systemd-udevd"
problem
fi
${UDEVD} --daemon --resolve-names=never
udevadm trigger
udevadm settle
if [ -f /etc/mdadm.conf ] ; then mdadm -As ; fi
if [ -x /sbin/vgchange ] ; then /sbin/vgchange -a y > /dev/null ; fi
if [ -n "$rootdelay" ] ; then sleep "$rootdelay" ; fi
do_try_resume # This function will not return if resuming from disk
do_mount_root
killall -w ${UDEVD##*/}
exec switch_root /.root "$init" "$@"
EOF
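The device-resolution logic in do_mount_root above can be exercised on its own. This standalone sketch shows how a root=UUID=... value from the kernel command line is mapped to a device node path; the eval sets a variable named UUID (or PARTUUID/LABEL) from the parameter itself, and the UUID value here is merely a sample:

```shell
# Standalone sketch of the root= mapping performed by do_mount_root above.
root="UUID=54b934a9-302d-415e-ac11-4988408eb0a8"   # sample value
case "$root" in
    /dev/*    ) device=$root ;;
    UUID=*    ) eval $root; device="/dev/disk/by-uuid/$UUID" ;;
    PARTUUID=*) eval $root; device="/dev/disk/by-partuuid/$PARTUUID" ;;
    LABEL=*   ) eval $root; device="/dev/disk/by-label/$LABEL" ;;
esac
echo $device
```

The resulting path is a udev-created symlink, which is why the init script starts udevd and runs udevadm trigger and udevadm settle before mounting.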
Using an initramfs
Required Runtime Dependency
cpio (used by the mkinitramfs script to create the archive)
Other Runtime Dependencies
LVM2-2.03.18 and/or mdadm-4.2 must be installed before generating the initramfs, if the system partition uses them.
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/initramfs
To build an initramfs, run the following as the root user:
mkinitramfs [KERNEL VERSION]
The optional argument is the directory where the appropriate kernel modules are located. This must be a subdirectory of /lib/modules. If no kernel version is specified, the initramfs is named initrd.img-no-kmods. If a kernel version is specified, the initrd is named initrd.img-$KERNEL_VERSION and is only appropriate for that specific kernel. The output file is placed in the current directory.
If early loading of microcode is needed (see the section called “Microcode updates for CPUs”), you can install the appropriate blob or container in /lib/firmware. It will be automatically added to the initrd when running mkinitramfs.
After generating the initrd, copy it to the /boot directory.
Now edit /boot/grub/grub.cfg and add a new menuentry. Below are several examples.
# Generic initramfs and root fs identified by UUID
menuentry "LFS Dev (LFS-7.0-Feb14) initrd, Linux 3.0.4"
{
linux /vmlinuz-3.0.4-lfs-20120214 root=UUID=54b934a9-302d-415e-ac11-4988408eb0a8 ro
initrd /initrd.img-no-kmods
}
# Generic initramfs and root fs on LVM partition
menuentry "LFS Dev (LFS-7.0-Feb18) initrd lvm, Linux 3.0.4"
{
linux /vmlinuz-3.0.4-lfs-20120218 root=/dev/mapper/myroot ro
initrd /initrd.img-no-kmods
}
# Specific initramfs and root fs identified by LABEL
menuentry "LFS Dev (LFS-7.1-Feb20) initrd label, Linux 3.2.6"
{
linux /vmlinuz-3.2.6-lfs71-120220 root=LABEL=lfs71 ro
initrd /initrd.img-3.2.6-lfs71-120220
}
Finally, reboot the system and select the desired system.
5.2 btrfs-progs-6.1.3
Introduction to btrfs-progs
The btrfs-progs package contains administration and debugging tools for the B-tree file system (btrfs).
This package is known to build and work properly using an LFS 11.3 platform.
Package Information
- Download (HTTP): https://www.kernel.org/pub/linux/kernel/people/kdave/btrfs-progs/btrfs-progs-v6.1.3.tar.xz
- Download MD5 sum: d5f703b4085dc745003c16d046d32c2b
- Download size: 2.2 MB
- Estimated disk space required: 53 MB (add 8.2 GB for tests)
- Estimated build time: 0.2 SBU (add 5.0 SBU for tests, but will be longer on slow disks)
Btrfs-progs Dependencies
Required
Recommended
asciidoc-10.2.0 (or asciidoctor-2.0.18) and xmlto-0.0.28 (both required to generate man pages)
Optional
LVM2-2.03.18 (dmsetup is used in tests), reiserfsprogs-3.6.27 (for tests), and sphinx-6.1.3 (required to build documentation)
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/btrfs-progs
Kernel Configuration
Enable the following option in the kernel configuration and recompile the kernel:
File systems --->
<*/M> Btrfs filesystem support [CONFIG_BTRFS_FS]
In addition to the above and to the options required for LVM2-2.03.18 and reiserfsprogs-3.6.27, the following options must be enabled for running tests:
File systems --->
[*] Btrfs POSIX Access Control Lists [CONFIG_BTRFS_FS_POSIX_ACL]
[*] ReiserFS extended attributes [CONFIG_REISERFS_FS_XATTR]
[*] ReiserFS POSIX Access Control Lists [CONFIG_REISERFS_FS_POSIX_ACL]
Installation of btrfs-progs
Install btrfs-progs by running the following commands:
./configure --prefix=/usr --disable-documentation &&
make
Note
Some tests require grep built with perl regular expressions. To obtain this, rebuild grep with the LFS Chapter 8 instructions after installing pcre2-10.42.
Before running tests, build a support program:
make fssum
To test the results, issue (as the root user):
pushd tests
./fsck-tests.sh
./mkfs-tests.sh
./cli-tests.sh
./convert-tests.sh
./misc-tests.sh
./fuzz-tests.sh
popd
Note
If the above-mentioned kernel options are not enabled, some tests fail and prevent the remaining tests from running, because the test disk image is not cleanly unmounted.
The mkfs test 025-zoned-parallel is known to fail.
Install the package as the root user:
make install
Command Explanations
--disable-documentation: This option is needed if the recommended dependencies are not installed.
Contents
Installed Programs: btrfs, btrfs-convert, btrfs-find-root, btrfs-image, btrfs-map-logical, btrfs-select-super, btrfsck (link to btrfs), btrfstune, fsck.btrfs, and mkfs.btrfs
Installed Libraries: libbtrfs.so and libbtrfsutil.so
Installed Directories: /usr/include/btrfs
Short Descriptions
btrfs is the main interface into btrfs filesystem operations
btrfs-convert converts from an ext2/3/4 or reiserfs filesystem to btrfs
btrfs-find-root is a filter to find btrfs root
btrfs-map-logical maps btrfs logical extent to physical extent
btrfs-select-super overwrites the primary superblock with a backup copy
btrfstune tunes various filesystem parameters
fsck.btrfs does nothing, but is present for consistency with fstab
mkfs.btrfs creates a btrfs file system.
5.3 dosfstools-4.2
Introduction to dosfstools
The dosfstools package contains various utilities for use with the FAT family of file systems.
This package is known to build and work properly using an LFS 11.3 platform.
Package Information
- Download (HTTP): https://github.com/dosfstools/dosfstools/releases/download/v4.2/dosfstools-4.2.tar.gz
- Download MD5 sum: 49c8e457327dc61efab5b115a27b087a
- Download size: 314 KB
- Estimated disk space required: 3.5 MB
- Estimated build time: less than 0.1 SBU
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/dosfstools
Kernel Configuration
Enable the following option in the kernel configuration and recompile the kernel:
File systems --->
DOS/FAT/EXFAT/NT Filesystems --->
<*/M> MSDOS fs support [CONFIG_MSDOS_FS]
<*/M> VFAT (Windows-95) fs support [CONFIG_VFAT_FS]
Installation of dosfstools
Install dosfstools by running the following commands:
./configure --prefix=/usr \
--enable-compat-symlinks \
--mandir=/usr/share/man \
--docdir=/usr/share/doc/dosfstools-4.2 &&
make
This package does not come with a test suite.
Now, as the root user:
make install
Command Explanations
--enable-compat-symlinks: This switch creates the dosfsck, dosfslabel, fsck.msdos, fsck.vfat, mkdosfs, mkfs.msdos, and mkfs.vfat symlinks required by some programs.
Contents
Installed Programs: fatlabel, fsck.fat, and mkfs.fat
Short Descriptions
fatlabel sets or gets a MS-DOS filesystem label from a given device
fsck.fat checks and repairs MS-DOS filesystems
mkfs.fat creates an MS-DOS filesystem under Linux.
5.4 Fuse-3.13.1
Introduction to Fuse
FUSE (Filesystem in Userspace) is a simple interface for userspace programs to export a virtual filesystem to the Linux kernel. Fuse also aims to provide a secure method for non-privileged users to create and mount their own filesystem implementations.
This package is known to build and work properly using an LFS 11.3 platform.
Package Information
- Download (HTTP): https://github.com/libfuse/libfuse/releases/download/fuse-3.13.1/fuse-3.13.1.tar.xz
- Download MD5 sum: f2830b775bcba2ab9cb94f2619c077a4
- Download size: 3.9 MB
- Estimated disk space required: 102 MB (with tests and documentation)
- Estimated build time: 0.1 SBU (add 0.4 SBU for tests)
Fuse Dependencies
Optional
Doxygen-1.9.6 (to rebuild the API documentation) and pytest-7.2.1 (required for tests)
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/fuse
Kernel Configuration
Enable the following options in the kernel configuration and recompile the kernel if necessary:
File systems --->
<*/M> FUSE (Filesystem in Userspace) support [CONFIG_FUSE_FS]
To run the tests, support for character devices in userspace must also be enabled:
File systems --->
<*/M> FUSE (Filesystem in Userspace) support [CONFIG_FUSE_FS]
<*/M> Character device in Userspace support [CONFIG_CUSE]
Installation of Fuse
Install Fuse by running the following commands:
sed -i '/^udev/,$ s/^/#/' util/meson.build &&
mkdir build &&
cd build &&
meson --prefix=/usr --buildtype=release .. &&
ninja
The API documentation is included in the package, but if you have Doxygen-1.9.6 installed and wish to rebuild it, issue:
pushd .. &&
doxygen doc/Doxyfile &&
popd
To test the results, run (as the root user):
python3 -m pytest test/
The pytest-7.2.1 Python module is required for the tests. One test named test_cuse will fail if the CONFIG_CUSE configuration item was not enabled when the kernel was built. Two tests, test_ctests.py and test_examples.py, will produce a warning because a deprecated Python module is used.
Now, as the root user:
ninja install &&
chmod u+s /usr/bin/fusermount3 &&
cd .. &&
install -v -m755 -d /usr/share/doc/fuse-3.13.1 &&
install -v -m644 doc/{README.NFS,kernel.txt} \
/usr/share/doc/fuse-3.13.1 &&
cp -Rv doc/html /usr/share/doc/fuse-3.13.1
Command Explanations
sed … util/meson.build: This command disables the installation of a boot script and udev rule that are not needed.
--buildtype=release: Specify a buildtype suitable for stable releases of the package, as the default may produce unoptimized binaries.
Configuring fuse
Config Files
Some options regarding mount policy can be set in the file /etc/fuse.conf. To install the file run the following command as the root user:
cat > /etc/fuse.conf << "EOF"
# Set the maximum number of FUSE mounts allowed to non-root users.
# The default is 1000.
#
#mount_max = 1000
# Allow non-root users to specify the 'allow_other' or 'allow_root'
# mount options.
#
#user_allow_other
EOF
Additional information about the meaning of the configuration options is found in the man page.
Contents
Installed Programs: fusermount3 and mount.fuse3
Installed Libraries: libfuse3.so
Installed Directories: /usr/include/fuse3 and /usr/share/doc/fuse-3.13.1
Short Descriptions
fusermount3 is a suid root program to mount and unmount Fuse filesystems
mount.fuse3 is the command mount calls to mount a Fuse filesystem
libfuse3.so contains the FUSE API functions.
5.5 jfsutils-1.1.15
Introduction to jfsutils
The jfsutils package contains administration and debugging tools for the jfs file system.
This package is known to build and work properly using an LFS 11.3 platform.
Package Information
- Download (HTTP): https://jfs.sourceforge.net/project/pub/jfsutils-1.1.15.tar.gz
- Download MD5 sum: 8809465cd48a202895bc2a12e1923b5d
- Download size: 532 KB
- Estimated disk space required: 8.9 MB
- Estimated build time: 0.1 SBU
Additional Downloads
- Required patch to fix issues exposed by GCC 10 and later: https://www.linuxfromscratch.org/patches/blfs/11.3/jfsutils-1.1.15-gcc10_fix-1.patch
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/jfs
Kernel Configuration
Enable the following option in the kernel configuration and recompile the kernel:
File systems --->
<*/M> JFS filesystem support [CONFIG_JFS_FS]
Installation of jfsutils
First, fix some issues exposed by GCC 10 and later:
patch -Np1 -i ../jfsutils-1.1.15-gcc10_fix-1.patch
Install jfsutils by running the following commands:
sed -i "/unistd.h/a#include <sys/types.h>" fscklog/extract.c &&
sed -i "/ioctl.h/a#include <sys/sysmacros.h>" libfs/devices.c &&
./configure &&
make
This package does not come with a test suite.
Now, as the root user:
make install
Command Explanations
sed …: Fixes building with glibc 2.28.
Contents
Installed Programs: fsck.jfs, jfs_debugfs, jfs_fsck, jfs_fscklog, jfs_logdump, jfs_mkfs, jfs_tune, mkfs.jfs
Installed Libraries: None
Installed Directories: None
Short Descriptions
fsck.jfs is used to replay the JFS transaction log, check a JFS formatted device for errors, and fix any errors found
jfs_fsck is a hard link to fsck.jfs
mkfs.jfs constructs a JFS file system
jfs_mkfs is a hard link to mkfs.jfs
jfs_debugfs is a program which can be used to perform various low-level actions on a JFS formatted device
jfs_fscklog extracts a JFS fsck service log into a file and/or formats and displays the extracted file
jfs_logdump dumps the contents of the journal log from the specified JFS formatted device into output file ./jfslog.dmp
jfs_tune adjusts tunable file system parameters on JFS file systems.
5.6 LVM2-2.03.18
Introduction to LVM2
The LVM2 package is a set of tools that manage logical partitions. It allows spanning of file systems across multiple physical disks and disk partitions and provides for dynamic growing or shrinking of logical partitions, mirroring and low storage footprint snapshots.
This package is known to build and work properly using an LFS 11.3 platform.
Package Information
- Download (HTTP): https://sourceware.org/ftp/lvm2/LVM2.2.03.18.tgz
- Download (FTP): ftp://sourceware.org/pub/lvm2/LVM2.2.03.18.tgz
- Download MD5 sum: cda7b89ae45ddb4a0cee768645ac9757
- Download size: 2.6 MB
- Estimated disk space required: 48 MB (add 25 MB for tests; transient files can grow up to around 800 MB in the /tmp directory during tests)
- Estimated build time: 0.1 SBU (using parallelism=4; add 9 to 48 SBU for tests, depending on disk speed)
LVM2 Dependencies
Required
Optional
mdadm-4.2, reiserfsprogs-3.6.27, Valgrind-3.20.0, Which-2.21, xfsprogs-6.1.1 (all five may be used, but are not required, for tests), thin-provisioning-tools, and vdo
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/lvm2
Kernel Configuration
Enable the following options in the kernel configuration and recompile the kernel:
Note
There are several other Device Mapper options in the kernel beyond those listed below. In order to get reasonable results when running the regression tests, all of them must be enabled, either built in or as a module. The tests will all time out if the Magic SysRq key is not enabled.
Device Drivers --->
[*] Multiple devices driver support (RAID and LVM) ---> [CONFIG_MD]
<*/M> Device mapper support [CONFIG_BLK_DEV_DM]
<*/M> Crypt target support [CONFIG_DM_CRYPT]
<*/M> Snapshot target [CONFIG_DM_SNAPSHOT]
<*/M> Thin provisioning target [CONFIG_DM_THIN_PROVISIONING]
<*/M> Cache target (EXPERIMENTAL) [CONFIG_DM_CACHE]
<*/M> Mirror target [CONFIG_DM_MIRROR]
<*/M> Zero target [CONFIG_DM_ZERO]
<*/M> I/O delaying target [CONFIG_DM_DELAY]
[*] Block devices --->
<*/M> RAM block device support [CONFIG_BLK_DEV_RAM]
Kernel hacking --->
Generic Kernel Debugging Instruments --->
[*] Magic SysRq key [CONFIG_MAGIC_SYSRQ]
Installation of LVM2
Install LVM2 by running the following commands:
PATH+=:/usr/sbin \
./configure --prefix=/usr \
--enable-cmdlib \
--enable-pkgconfig \
--enable-udev_sync &&
make
The tests use udev for logical volume synchronization, so the LVM udev rules and some utilities need to be installed before running the tests. If you are installing LVM2 for the first time, and do not want to install the full package before running the tests, the minimal set of utilities can be installed by running the following instructions as the root user:
make -C tools install_tools_dynamic &&
make -C udev install &&
make -C libdm install
To test the results, issue, as the root user:
LC_ALL=en_US.UTF-8 make check_local
Some tests may hang. In this case they can be skipped by adding S=<test_name> to the make command.
The tests do not implement the “expected fail” possibility, and a small number of test failures is expected by upstream. More failures may happen because some kernel options are missing. For example, the lack of the dm-delay device mapper target explains some failures. Some tests may fail if there is insufficient free space available in the partition with the /tmp directory. At least one test fails if 16 TB is not available. Some tests are flagged “warned” if thin-provisioning-tools are not installed. A workaround is to add the following flags to configure:
--with-thin-check= \
--with-thin-dump= \
--with-thin-repair= \
--with-thin-restore= \
--with-cache-check= \
--with-cache-dump= \
--with-cache-repair= \
--with-cache-restore=
Some tests may hang. They can be removed if necessary, for example: rm test/shell/lvconvert-raid-reshape.sh. The tests generate a lot of kernel messages, which may clutter your terminal. You can disable them by issuing dmesg -D before running the tests (do not forget to issue dmesg -E when tests are done).
Note
The checks create device nodes in the /tmp directory. The tests will fail if /tmp is mounted with the nodev option.
Now, as the root user:
make install
make install_systemd_units
Command Explanations
PATH+=:/usr/sbin: The path must contain /usr/sbin for proper system tool detection by the configure script. This instruction ensures that PATH is properly set even if you build as an unprivileged user.
--enable-cmdlib: This switch enables building of the shared command library. It is required when building the event daemon.
--enable-pkgconfig: This switch enables installation of pkg-config support files.
--enable-udev_sync: This switch enables synchronisation with Udev processing.
--enable-dmeventd: This switch enables building of the Device Mapper event daemon.
make install_systemd_units: This is needed to install a unit that activates logical volumes at boot. It is not installed by default.
Configuring LVM2
Config File
/etc/lvm/lvm.conf
Configuration Information
The default configuration still references the obsolete /var/lock directory. This creates a deadlock at boot time. Change this (as the root user):
sed -e '/locking_dir =/{s/#//;s/var/run/}' \
-i /etc/lvm/lvm.conf
Contents
Installed Programs: blkdeactivate, dmeventd (optional), dmsetup, fsadm, lvm, lvmdump, and lvm_import_vdo. There are also numerous symbolic links to lvm that implement specific functionalities
Installed Libraries: libdevmapper.so and liblvm2cmd.so; optional: libdevmapper-event.so, libdevmapper-event-lvm2.so, libdevmapper-event-lvm2mirror.so, libdevmapper-event-lvm2raid.so, libdevmapper-event-lvm2snapshot.so, libdevmapper-event-lvm2thin.so, and libdevmapper-event-lvm2vdo.so
Installed Directories: /etc/lvm and /usr/lib/device-mapper (optional)
Short Descriptions
blkdeactivate is a utility to deactivate block devices
dmeventd (optional) is the Device Mapper event daemon
dmsetup is a low level logical volume management tool
fsadm is a utility used to resize or check filesystem on a device
lvm provides the command-line tools for LVM2. Commands are implemented via symbolic links to this program to manage physical devices (pv), volume groups (vg) and logical volumes (lv*)
lvmdump is a tool used to dump various information concerning LVM2
vgimportclone is used to import a duplicated VG (e.g. hardware snapshot)
libdevmapper.so contains the Device Mapper API functions.
5.7 About Logical Volume Management (LVM)
LVM manages disk drives. It allows multiple drives and partitions to be combined into larger volume groups, assists in making backups through a snapshot, and allows for dynamic volume resizing. It can also provide mirroring similar to a RAID 1 array.
A complete discussion of LVM is beyond the scope of this introduction, but basic concepts are presented below.
To run any of the commands presented here, the LVM2-2.03.18 package must be installed. All commands must be run as the root user.
Management of disks with lvm is accomplished using the following concepts:
physical volumes
These are physical disks or partitions such as /dev/sda3 or /dev/sdb.
volume groups
These are named groups of physical volumes that can be manipulated by the administrator. The number of physical volumes that make up a volume group is arbitrary. Physical volumes can be dynamically added or removed from a volume group.
logical volumes
Volume groups may be subdivided into logical volumes. Each logical volume can then be individually formatted as if it were a regular Linux partition. Logical volumes may be dynamically resized by the administrator according to need.
To give a concrete example, suppose that you have two 2 TB disks. Also suppose a really large amount of space is required for a very large database, mounted on /srv/mysql. This is what the initial set of partitions would look like:
Partition Use Size Partition Type
/dev/sda1 /boot 100MB 83 (Linux)
/dev/sda2 / 10GB 83 (Linux)
/dev/sda3 swap 2GB 82 (Swap)
/dev/sda4 LVM remainder 8e (LVM)
/dev/sdb1 swap 2GB 82 (Swap)
/dev/sdb2 LVM remainder 8e (LVM)
First initialize the physical volumes:
pvcreate /dev/sda4 /dev/sdb2
Note
A full disk can be used as part of a physical volume, but beware that the pvcreate command will destroy any partition information on that disk.
Next create a volume group named lfs-lvm:
vgcreate lfs-lvm /dev/sda4 /dev/sdb2
The status of the volume group can be checked by running the command vgscan. Now create the logical volumes. Since there is about 3900 GB available, leave about 900 GB free for expansion. Note that the logical volume named mysql is larger than any physical disk.
lvcreate --name mysql --size 2500G lfs-lvm
lvcreate --name home --size 500G lfs-lvm
Finally, the logical volumes can be formatted and mounted. In this example, the ext4 and jfs (jfsutils-1.1.15) file systems are used for demonstration purposes.
mkfs -t ext4 /dev/lfs-lvm/home
mkfs -t jfs /dev/lfs-lvm/mysql
mount /dev/lfs-lvm/home /home
mkdir -p /srv/mysql
mount /dev/lfs-lvm/mysql /srv/mysql
It may be necessary to activate those logical volumes before they appear in /dev. They can all be activated at the same time by issuing, as the root user:
vgchange -a y
An LVM logical volume can host a root filesystem, but requires the use of an initramfs (initial RAM file system). The initramfs proposed in the section called “About initramfs” allows passing the LVM volume in the root= switch of the kernel command line.
If an initramfs is not used, a race condition in systemd may prevent mounting logical volumes through /etc/fstab. You must instead create a “mount” unit (see systemd.mount(5)) as in the following example, which mounts the /home directory automatically at boot:
cat > /etc/systemd/system/home.mount << EOF
[Unit]
Description=Mount the lvm volume /dev/lfs-lvm/home to /home
[Mount]
What=/dev/lfs-lvm/home
Where=/home
Type=ext4
Options=defaults
[Install]
WantedBy=multi-user.target
EOF
Note
The name of the unit must be the name of the mount point, with the '/' characters replaced by '-' and the leading one omitted.
Next the unit must be enabled with:
systemctl enable home.mount
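The naming rule from the note above can be sketched as a small shell function, assuming a simple path with no characters that need escaping (systemd-escape handles the general case):

```shell
#!/bin/bash
# Mirror the rule: omit the leading '/', replace the remaining '/'
# separators with '-', and append the .mount suffix.
mount_unit_name() {
    local path=${1#/}           # drop the leading slash
    echo "${path//\//-}.mount"  # '/' -> '-'
}

mount_unit_name /home        # home.mount
mount_unit_name /srv/mysql   # srv-mysql.mount
```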
For more information about LVM, see the LVM HOWTO and the lvm man pages. A good in-depth guide is available from Red Hat®, although it sometimes makes reference to proprietary tools.
5.8 About RAID
The storage technology known as RAID (Redundant Array of Independent Disks) combines multiple physical disks into a logical unit. The drives can generally be combined to provide data redundancy or to extend the size of logical units beyond the capability of the physical disks or both. The technology also allows for providing hardware maintenance without powering down the system.
The types of RAID organization are described in the RAID Wiki.
Note that while RAID provides protection against disk failures, it is not a substitute for backups. A deleted file is still deleted on all the disks of a RAID array. Modern backups are generally done via rsync-3.2.7.
There are three major types of RAID implementation: Hardware RAID, BIOS-based RAID, and Software RAID.
Hardware RAID
Hardware-based RAID provides its capability through proprietary hardware and data layouts. Control and configuration are generally done via firmware, in conjunction with executable programs made available by the device manufacturer. The capabilities are generally supplied via a PCI card, although there are some instances of RAID components integrated into the motherboard. Hardware RAID may also be available in a stand-alone enclosure.
One advantage of hardware-based RAID is that the drives are offered to the operating system as a logical drive and no operating system dependent configuration is needed.
Disadvantages include difficulties in transferring drives from one system to another, updating firmware, or replacing failed RAID hardware.
BIOS-based RAID
Some computers offer a hardware-like RAID implementation in the system BIOS. Sometimes this is referred to as ‘fake’ RAID as the capabilities are generally incorporated into firmware without any hardware acceleration.
The advantages and disadvantages of BIOS-based RAID are generally the same as hardware RAID with the additional disadvantage that there is no hardware acceleration.
In some cases, BIOS-based RAID firmware is enabled by default (e.g. some DELL systems). If software RAID is desired, this option must be explicitly disabled in the BIOS.
Software RAID
Software-based RAID is the most flexible form of RAID. It is easy to install and update and provides full capability on all or part of any drives available to the system. In BLFS, the RAID software is found in mdadm-4.2.
Configuring a RAID device is straightforward using mdadm. Generally devices are created in the /dev directory as /dev/mdx where x is an integer.
The first step in creating a RAID array is to use partitioning software such as fdisk or parted-3.5 to define the partitions needed for the array. Usually, there will be one partition on each drive participating in the RAID array, but that is not strictly necessary. For this example, there will be four disk drives: /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd. They will be partitioned as follows:
Partition  Size    Type                Use
sda1:      100 MB  fd Linux raid auto  /boot (RAID 1) /dev/md0
sda2:      10 GB   fd Linux raid auto  / (RAID 1) /dev/md1
sda3:      2 GB    83 Linux swap       swap
sda4:      300 GB  fd Linux raid auto  /home (RAID 5) /dev/md2
sdb1:      100 MB  fd Linux raid auto  /boot (RAID 1) /dev/md0
sdb2:      10 GB   fd Linux raid auto  / (RAID 1) /dev/md1
sdb3:      2 GB    83 Linux swap       swap
sdb4:      300 GB  fd Linux raid auto  /home (RAID 5) /dev/md2
sdc1:      12 GB   fd Linux raid auto  /usr/src (RAID 0) /dev/md3
sdc2:      300 GB  fd Linux raid auto  /home (RAID 5) /dev/md2
sdd1:      12 GB   fd Linux raid auto  /usr/src (RAID 0) /dev/md3
sdd2:      300 GB  fd Linux raid auto  /home (RAID 5) /dev/md2
In this arrangement, a separate boot partition is created as the first small RAID array and a root filesystem as the second RAID array, both mirrored (RAID 1). The third array is a large (about 900 GB) RAID 5 array for the /home directory. This provides redundancy while striping data across multiple devices, improving speed for both reading and writing large files. Finally, a fourth array stripes two 12 GB partitions into a larger (24 GB) RAID 0 device for /usr/src.
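The usable sizes of these arrays follow directly from how each RAID level spends its member devices; a quick sketch of the arithmetic:

```python
# Usable capacity for n equal members of size s:
#   RAID 1 mirrors everything                 -> s
#   RAID 0 stripes with no redundancy         -> n * s
#   RAID 5 keeps one member's worth of parity -> (n - 1) * s
def usable_gb(level, members, size_gb):
    if level == 1:
        return size_gb
    if level == 0:
        return members * size_gb
    if level == 5:
        return (members - 1) * size_gb
    raise ValueError("unhandled RAID level")

print(usable_gb(1, 2, 10))    # /dev/md1 (root)
print(usable_gb(5, 4, 300))   # /dev/md2 (/home)
print(usable_gb(0, 2, 12))    # /dev/md3 (/usr/src)
```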
Note
All mdadm commands must be run as the root user.
To create these RAID arrays the commands are:
/sbin/mdadm -Cv /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
/sbin/mdadm -Cv /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
/sbin/mdadm -Cv /dev/md3 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1
/sbin/mdadm -Cv /dev/md2 --level=5 --raid-devices=4 \
/dev/sda4 /dev/sdb4 /dev/sdc2 /dev/sdd2
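To have the arrays assembled automatically at boot, they can be recorded in /etc/mdadm.conf. A minimal sketch follows; the UUIDs here are placeholders, and real entries are best generated with /sbin/mdadm --detail --scan:

```
# /etc/mdadm.conf -- hypothetical example entries
ARRAY /dev/md0 UUID=fcb944a4:9054aeb2:d987d8fe:a89121f8
ARRAY /dev/md1 UUID=d987d8fe:a89121f8:fcb944a4:9054aeb2
```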
The devices created can be examined individually. For example, to see the details of /dev/md1, use /sbin/mdadm --detail /dev/md1:
Version : 1.2
Creation Time : Tue Feb 7 17:08:45 2012
Raid Level : raid1
Array Size : 10484664 (10.00 GiB 10.74 GB)
Used Dev Size : 10484664 (10.00 GiB 10.74 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Tue Feb 7 23:11:53 2012
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : core2-blfs:0 (local to host core2-blfs)
UUID : fcb944a4:9054aeb2:d987d8fe:a89121f8
Events : 17
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
From this point, the partitions can be formatted with the filesystem of choice (e.g. ext3, ext4, xfsprogs-6.1.1, reiserfsprogs-3.6.27, etc). The formatted partitions can then be mounted. The /etc/fstab file can use the devices created for mounting at boot time and the linux command line in /boot/grub/grub.cfg can specify root=/dev/md1.
Note
The swap devices should be specified in the /etc/fstab file as normal. The kernel normally stripes swap data across multiple swap partitions, so swap should not be made part of a RAID array.
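Following the note above, striping across the two swap partitions in this example is done in /etc/fstab rather than with mdadm, by giving both partitions the same priority; a sketch:

```
# /etc/fstab -- equal pri= values let the kernel use both in parallel
/dev/sda3  swap  swap  pri=1  0 0
/dev/sdb3  swap  swap  pri=1  0 0
```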
For further options and management details of RAID devices, refer to man mdadm.
Additional details for monitoring RAID arrays and dealing with problems can be found at the Linux RAID Wiki.
5.9 mdadm-4.2
Introduction to mdadm
The mdadm package contains administration tools for software RAID.
This package is known to build and work properly using an LFS 11.3 platform.
Package Information
-
Download (HTTP): https://www.kernel.org/pub/linux/utils/raid/mdadm/mdadm-4.2.tar.xz
-
Download MD5 sum: a304eb0a978ca81045620d06547050a6
-
Download size: 444 KB
-
Estimated disk space required: 5.0 MB
-
Estimated build time: 0.1 SBU
mdadm Dependencies
Optional
A MTA
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/mdadm
Caution
Kernel versions in series 4.1 through 4.4.1 have a broken RAID implementation. Use a kernel with version at or above 4.4.2.
Kernel Configuration
Enable the following options in the kernel configuration and recompile the kernel, if necessary. Only the RAID types desired are required.
Device Drivers --->
[*] Multiple devices driver support (RAID and LVM) ---> [CONFIG_MD]
<*> RAID support [CONFIG_BLK_DEV_MD]
[*] Autodetect RAID arrays during kernel boot [CONFIG_MD_AUTODETECT]
<*/M> Linear (append) mode [CONFIG_MD_LINEAR]
<*/M> RAID-0 (striping) mode [CONFIG_MD_RAID0]
<*/M> RAID-1 (mirroring) mode [CONFIG_MD_RAID1]
<*/M> RAID-10 (mirrored striping) mode [CONFIG_MD_RAID10]
<*/M> RAID-4/RAID-5/RAID-6 mode [CONFIG_MD_RAID456]
Installation of mdadm
Build mdadm by running the following command:
make
This package does not come with a working test suite.
Now, as the root user:
make BINDIR=/usr/sbin install
Command Explanations
make everything: This optional target creates extra programs, particularly a statically-linked version of mdadm. This needs to be manually installed.
--keep-going: Run the tests to the end, even if one or more tests fail.
--logdir=test-logs: Defines the directory where test logs are saved.
--save-logs: Instructs the test suite to save the logs.
--tests=`<test1,test2,...>`: Optional comma separated list of tests to be executed (all tests, if this option is not passed).
Contents
Installed Programs: mdadm and mdmon
Installed Libraries: None
Installed Directory: None
Short Descriptions
mdadm manages MD devices, also known as Linux Software RAID
mdmon monitors MD external metadata arrays.
5.10 ntfs-3g-2022.10.3
Introduction to Ntfs-3g
Note
A new read-write driver for NTFS, called NTFS3, has been added into the Linux kernel since the 5.15 release. The performance of NTFS3 is much better than that of ntfs-3g. To enable NTFS3, enable the following options in the kernel configuration and recompile the kernel if necessary:
File systems --->
<*/M> NTFS Read-Write file system support [CONFIG_NTFS3_FS]
To ensure the mount command uses NTFS3 for ntfs partitions, create a wrapper script:
cat > /usr/sbin/mount.ntfs << "EOF" &&
#!/bin/sh
exec mount -t ntfs3 "$@"
EOF
chmod -v 755 /usr/sbin/mount.ntfs
With the kernel support available, ntfs-3g is only needed if you need the utilities from it (for example, to create NTFS filesystems).
The Ntfs-3g package contains a stable, read-write open source driver for NTFS partitions. NTFS partitions are used by most Microsoft operating systems. Ntfs-3g allows you to mount NTFS partitions in read-write mode from your Linux system. It uses the FUSE kernel module to be able to implement NTFS support in userspace. The package also contains various utilities useful for manipulating NTFS partitions.
This package is known to build and work properly using an LFS 11.3 platform.
Package Information
-
Download (HTTP): https://tuxera.com/opensource/ntfs-3g_ntfsprogs-2022.10.3.tgz
-
Download MD5 sum: a038af61be7584b79f8922ff11244090
-
Download size: 1.3 MB
-
Estimated disk space required: 22 MB
-
Estimated build time: 0.2 SBU
Ntfs-3g Dependencies
Optional
fuse 2.x (this can be used instead of the internal fuse library, but it disables user mounts)
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/ntfs-3g
Kernel Configuration
Enable the following options in the kernel configuration and recompile the kernel if necessary:
File systems --->
<*/M> FUSE (Filesystem in Userspace) support [CONFIG_FUSE_FS]
Installation of Ntfs-3g
Install Ntfs-3g by running the following commands:
./configure --prefix=/usr \
--disable-static \
--with-fuse=internal \
--docdir=/usr/share/doc/ntfs-3g-2022.10.3 &&
make
This package does not come with a test suite.
Now, as the root user:
make install
It’s recommended to use the in-kernel NTFS3 driver for mounting NTFS filesystems, instead of ntfs-3g (see the note at the start of this page). However, if you want to use ntfs-3g to mount NTFS filesystems anyway, create symlinks for the mount command and its man page:
ln -sv ../bin/ntfs-3g /usr/sbin/mount.ntfs &&
ln -sv ntfs-3g.8 /usr/share/man/man8/mount.ntfs.8
Command Explanations
--disable-static: This switch prevents installation of static versions of the libraries.
--with-fuse=internal: This switch forces ntfs-3g to use an internal copy of the fuse-2.x library. This is required if you wish to allow users to mount NTFS partitions.
--disable-ntfsprogs: Disables installation of various utilities used to manipulate NTFS partitions.
chmod -v 4755 /usr/bin/ntfs-3g: Making mount.ntfs setuid root allows non root users to mount NTFS partitions.
Using Ntfs-3g
To mount a Windows partition at boot time, put a line like this in /etc/fstab:
/dev/sda1 /mnt/windows auto defaults 0 0
To allow users to mount a usb stick with an NTFS filesystem on it, put a line similar to this (change sdc1 to whatever a usb stick would be on your system) in /etc/fstab:
/dev/sdc1 /mnt/usb auto user,noauto,umask=0,utf8 0 0
In order for a user to be able to mount the usb stick, they will need to be able to write to /mnt/usb, so as the root user:
chmod -v 777 /mnt/usb
Contents
Installed Programs: lowntfs-3g, mkfs.ntfs, mkntfs, mount.lowntfs-3g, mount.ntfs, mount.ntfs-3g, ntfs-3g, ntfs-3g.probe, ntfscat, ntfsclone, ntfscluster, ntfscmp, ntfscp, ntfsfix, ntfsinfo, ntfslabel, ntfsls, ntfsresize and ntfsundelete
Installed Library: libntfs-3g.so
Installed Directories: /usr/include/ntfs-3g and /usr/share/doc/ntfs-3g
Short Descriptions
lowntfs-3g is similar to ntfs-3g but uses the Fuse low-level interface
mkfs.ntfs is a symlink to mkntfs
mkntfs creates an NTFS file system
mount.lowntfs-3g is a symlink to lowntfs-3g
mount.ntfs mounts an NTFS filesystem
mount.ntfs-3g is a symbolic link to ntfs-3g
ntfs-3g is an NTFS driver, which can create, remove, rename, move files, directories, hard links, and streams. It can also read and write files, including streams, sparse files and transparently compressed files. It can also handle special files like symbolic links, devices, and FIFOs; moreover it provides standard management of file ownership and permissions, including POSIX ACLs
ntfs-3g.probe tests if an NTFS volume is mountable read only or read-write, and exits with a status value accordingly. The volume can be a block device or image file
ntfscluster identifies files in a specified region of an NTFS volume
ntfscp copies a file to an NTFS volume
ntfsfix fixes common errors and forces Windows to check an NTFS partition
ntfsls lists directory contents on an NTFS filesystem
ntfscat prints NTFS files and streams on the standard output
ntfsclone clones an NTFS filesystem
ntfscmp compares two NTFS filesystems and shows the differences
ntfsinfo dumps a file’s attributes
ntfslabel displays or changes the label on an ntfs file system
ntfsresize resizes an NTFS filesystem without data loss
ntfsundelete recovers a deleted file from an NTFS volume
libntfs-3g.so contains the Ntfs-3g API functions.
5.11 gptfdisk-1.0.9
Introduction to gptfdisk
The gptfdisk package is a set of programs for creation and maintenance of GUID Partition Table (GPT) disk drives. A GPT partitioned disk is required for drives greater than 2 TB and is a modern replacement for legacy PC-BIOS partitioned disk drives that use a Master Boot Record (MBR). The main program, gdisk, has an interface similar to the classic fdisk program.
This package is known to build and work properly using an LFS 11.3 platform.
Package Information
-
Download (HTTP): https://downloads.sourceforge.net/gptfdisk/gptfdisk-1.0.9.tar.gz
-
Download MD5 sum: 01c11ecfa454096543562e3068530e01
-
Download size: 212 KB
-
Estimated disk space required: 2.3 MB
-
Estimated build time: less than 0.1 SBU (add 0.2 SBU for tests)
Additional Downloads
- Recommended patch: https://www.linuxfromscratch.org/patches/blfs/11.3/gptfdisk-1.0.9-convenience-1.patch
gptfdisk Dependencies
Required
Optional
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/gptdisk
Installation of gptfdisk
The gptfdisk package comes with a rudimentary Makefile. First we update it to provide a simple build and install interface, fix the location of a header file, and fix an issue introduced by a recent version of popt. Install gptfdisk by running the following commands:
patch -Np1 -i ../gptfdisk-1.0.9-convenience-1.patch &&
sed -i 's|ncursesw/||' gptcurses.cc &&
sed -i 's|sbin|usr/sbin|' Makefile &&
sed -i '/UUID_H/s/^.*$/#if defined (_UUID_UUID_H) || defined (_UL_LIBUUID_UUID_H)/' guid.cc &&
sed -i "/device =/s/= \(.*\);/= strdup(\1);/" gptcl.cc &&
make
To test the results, issue: make test.
Now, as the root user:
make install
Command Explanations
patch -Np1 …: This patch modifies the Makefile file so that it provides an “install” target.
Contents
Installed Programs: cgdisk, gdisk, fixparts, and sgdisk
Short Descriptions
cgdisk is an ncurses-based tool for manipulating GPT partitions
gdisk is an interactive text-mode tool for manipulating GPT partitions
fixparts repairs mis-formatted MBR based disk partitions
sgdisk is a partition manipulation program for GPT partitions similar to sfdisk.
5.12 parted-3.5
Introduction to parted
The Parted package is a disk partitioning and partition resizing tool.
This package is known to build and work properly using an LFS 11.3 platform.
Package Information
-
Download (HTTP): https://ftp.gnu.org/gnu/parted/parted-3.5.tar.xz
-
Download (FTP): ftp://ftp.gnu.org/gnu/parted/parted-3.5.tar.xz
-
Download MD5 sum: 336fde60786d5855b3876ee49ef1e6b2
-
Download size: 1.8 MB
-
Estimated disk space required: 33 MB (additional 3 MB for the tests and additional 2 MB for optional PDF and Postscript documentation)
-
Estimated build time: 0.3 SBU (additional 3.6 SBU for the tests)
Parted Dependencies
Recommended
LVM2-2.03.18 (device-mapper, required if building udisks)
Optional
dosfstools-4.2, Pth-2.0.7, texlive-20220321 (or install-tl-unx), and Digest::CRC (for tests)
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/parted
Optional Kernel Configuration for Tests
About 20 % more tests are run if the following kernel module is built:
Device Drivers --->
SCSI device support --->
[*] SCSI low-level drivers ---> [CONFIG_SCSI_LOW_LEVEL]
<M> SCSI debugging host and device simulator [CONFIG_SCSI_DEBUG]
Installation of parted
Install Parted by running the following commands:
./configure --prefix=/usr --disable-static &&
make &&
make -C doc html &&
makeinfo --html -o doc/html doc/parted.texi &&
makeinfo --plaintext -o doc/parted.txt doc/parted.texi
If you have texlive-20220321 installed and wish to create PDF and Postscript documentation issue the following commands:
texi2pdf -o doc/parted.pdf doc/parted.texi &&
texi2dvi -o doc/parted.dvi doc/parted.texi &&
dvips -o doc/parted.ps doc/parted.dvi
To test the results, issue, as the root user:
make check
Note
Many tests are skipped if not run as the root user.
Now, as the root user:
make install &&
install -v -m755 -d /usr/share/doc/parted-3.5/html &&
install -v -m644 doc/html/* \
/usr/share/doc/parted-3.5/html &&
install -v -m644 doc/{FAT,API,parted.{txt,html}} \
/usr/share/doc/parted-3.5
Install the optional PDF and Postscript documentation by issuing the following command as the root user:
install -v -m644 doc/FAT doc/API doc/parted.{pdf,ps,dvi} \
/usr/share/doc/parted-3.5
Command Explanations
--disable-static: This switch prevents installation of static versions of the libraries.
--disable-device-mapper: This option disables device mapper support. Add this parameter if you have not installed LVM2.
Contents
Installed Programs: parted and partprobe
Installed Libraries: libparted.so and libparted-fs-resize.so
Installed Directories: /usr/include/parted and /usr/share/doc/parted-3.5
Short Descriptions
parted is a partition manipulation program
partprobe informs the OS of partition table changes
libparted.so contains the Parted API functions.
5.13 reiserfsprogs-3.6.27
Introduction to reiserfsprogs
The reiserfsprogs package contains various utilities for use with the Reiser file system.
This package is known to build and work properly using an LFS 11.3 platform.
Package Information
-
Download (HTTP): https://www.kernel.org/pub/linux/kernel/people/jeffm/reiserfsprogs/v3.6.27/reiserfsprogs-3.6.27.tar.xz
-
Download MD5 sum: 90c139542725efc6da3a6b1709695395
-
Download size: 439 KB
-
Estimated disk space required: 13 MB
-
Estimated build time: 0.2 SBU
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/reiser
Kernel Configuration
Enable the following option in the kernel configuration and recompile the kernel:
File systems --->
<*/M> Reiserfs support [CONFIG_REISERFS_FS]
Installation of reiserfsprogs
Install reiserfsprogs by running the following commands:
sed -i '/parse_time.h/i #define _GNU_SOURCE' lib/parse_time.c &&
autoreconf -fiv &&
./configure --prefix=/usr &&
make
This package does not come with a test suite.
Now, as the root user:
make install
Command Explanations
sed …: Ensure a variable is defined for use with recent include files.
Contents
Installed Programs: debugreiserfs, mkreiserfs, reiserfsck, reiserfstune, and resize_reiserfs
Installed Library: libreiserfscore.so
Installed Directory: /usr/include/reiserfs
Short Descriptions
debugreiserfs can sometimes help to solve problems with ReiserFS file systems. If it is called without options, it prints the super block of any ReiserFS file system found on the device
mkreiserfs creates a ReiserFS file system
reiserfsck is used to check or repair a ReiserFS file system
reiserfstune is used for tuning the ReiserFS journal. WARNING: Don’t use this utility without first reading the man page thoroughly
resize_reiserfs is used to resize an unmounted ReiserFS file system.
5.14 smartmontools-7.3
Introduction to smartmontools
The smartmontools package contains utility programs (smartctl, smartd) to control/monitor storage systems using the Self-Monitoring, Analysis and Reporting Technology System (S.M.A.R.T.) built into most modern ATA and SCSI disks.
This package is known to build and work properly using an LFS 11.3 platform.
Package Information
-
Download (HTTP): https://downloads.sourceforge.net/smartmontools/smartmontools-7.3.tar.gz
-
Download MD5 sum: 7a71d388124e3cd43abf6586a43cb1ff
-
Download size: 1 MB
-
Estimated disk space required: 30 MB
-
Estimated build time: 0.2 SBU
smartmontools Dependencies
Optional (runtime)
cURL-7.88.1 or Lynx-2.8.9rel.1 or Wget-1.21.3 (download tools), and GnuPG-2.4.0 (encrypted hard disks)
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/smartmontools
Installation of smartmontools
Install smartmontools by running the following commands:
./configure --prefix=/usr \
--sysconfdir=/etc \
--docdir=/usr/share/doc/smartmontools-7.3 &&
make
This package does not come with a test suite.
Now, as the root user:
make install
Configuring smartmontools
Config File
/etc/smartd.conf
Configuration Information
See the embedded comments in /etc/smartd.conf for detailed instructions on customizing the smartd daemon.
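As an illustration of the file's syntax, the directive below follows the examples in the smartd.conf man page; the device, test schedule, and mail address are assumptions for this sketch:

```
# /etc/smartd.conf -- monitor one disk
# -a      monitor all SMART attributes
# -o on   enable automatic offline testing
# -S on   enable attribute autosave
# -s ...  short self-test daily at 02:00, long test Saturdays at 03:00
# -m root mail warnings to the root user
/dev/sda -a -o on -S on -s (S/../.././02|L/../../6/03) -m root
```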
Systemd Unit
If you want the smartd daemon to start automatically when the system is booted, enable the systemd unit provided by the package by executing the following command as the root user:
systemctl enable smartd
Contents
Installed Programs: smartctl, smartd, and update-smart-drivedb
Installed Libraries: None
Installed Directories: /usr/share/smartmontools, /usr/share/doc/smartmontools-7.3, and /etc/smartd_warning.d
Short Descriptions
smartctl is the control and monitor utility for SMART Disks
smartd is the SMART disk monitoring daemon
update-smart-drivedb is the update tool for the smartmontools drive database.
5.15 sshfs-3.7.3
Introduction to Sshfs
The Sshfs package contains a filesystem client based on the SSH File Transfer Protocol. This is useful for mounting a remote computer that you have ssh access to as a local filesystem. This allows you to drag and drop files or run shell commands on the remote files as if they were on your local computer.
This package is known to build and work properly using an LFS 11.3 platform.
Package Information
-
Download (HTTP): https://github.com/libfuse/sshfs/releases/download/sshfs-3.7.3/sshfs-3.7.3.tar.xz
-
Download MD5 sum: f704f0d1800bdb5214030a1603e8c6d6
-
Download size: 56 KB
-
Estimated disk space required: 0.9 MB
-
Estimated build time: less than 0.1 SBU
Sshfs Dependencies
Required
Fuse-3.13.1, GLib-2.74.5, and OpenSSH-9.2p1.
Optional
docutils-0.19 (required to build the man page)
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/sshfs
Installation of Sshfs
Install Sshfs by running the following commands:
mkdir build &&
cd build &&
meson --prefix=/usr --buildtype=release .. &&
ninja
This package does not come with a test suite.
Now, as the root user:
ninja install
Using Sshfs
To mount an ssh server you need to be able to log into the server. For example, to mount your remote home folder to the local ~/examplepath (the directory must exist and you must have permissions to write to it):
sshfs example.com:/home/userid ~/examplepath
When you’ve finished work and want to unmount it again:
fusermount3 -u ~/examplepath
You can also mount an sshfs filesystem at boot by adding an entry similar to the following in the /etc/fstab file:
userid@example.com:/path /media/path fuse.sshfs _netdev,IdentityFile=/home/userid/.ssh/id_rsa 0 0
See man 1 sshfs and man 8 mount.fuse3 for all available mount options.
Contents
Installed Program: sshfs
Installed Libraries: None
Installed Directories: None
Short Descriptions
sshfs mounts an ssh server as a local file system.
5.16 xfsprogs-6.1.1
Introduction to xfsprogs
The xfsprogs package contains administration and debugging tools for the XFS file system.
This package is known to build and work properly using an LFS 11.3 platform.
Package Information
-
Download (HTTP): https://www.kernel.org/pub/linux/utils/fs/xfs/xfsprogs/xfsprogs-6.1.1.tar.xz
-
Download MD5 sum: 9befb0877b9f874b0ff16bcc1f858985
-
Download size: 1.3 MB
-
Estimated disk space required: 77 MB
-
Estimated build time: 0.3 SBU (Using parallelism=4)
xfsprogs Dependencies
Required
Optional
ICU-72.1 (for unicode name scanning in xfs_scrub)
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/xfs
Kernel Configuration
Enable the following options in the kernel configuration and recompile the kernel:
File systems --->
<*/M> XFS filesystem support [CONFIG_XFS_FS]
Installation of xfsprogs
Install xfsprogs by running the following commands:
make DEBUG=-DNDEBUG \
INSTALL_USER=root \
INSTALL_GROUP=root
This package does not come with a test suite.
Now, as the root user:
make PKG_DOC_DIR=/usr/share/doc/xfsprogs-6.1.1 install &&
make PKG_DOC_DIR=/usr/share/doc/xfsprogs-6.1.1 install-dev &&
rm -rfv /usr/lib/libhandle.{a,la}
Command Explanations
make DEBUG=-DNDEBUG: Turns off debugging symbols.
INSTALL_USER=root INSTALL_GROUP=root: This sets the owner and group of the installed files.
OPTIMIZER="...": Adding this parameter to the end of the make command overrides the default optimization settings.
Contents
Installed Programs: fsck.xfs, mkfs.xfs, xfs_admin, xfs_bmap, xfs_copy, xfs_db, xfs_estimate, xfs_freeze, xfs_fsr, xfs_growfs, xfs_info, xfs_io, xfs_logprint, xfs_mdrestore, xfs_metadump, xfs_mkfile, xfs_ncheck, xfs_quota, xfs_repair, xfs_rtcp, xfs_scrub, xfs_scrub_all, and xfs_spaceman
Installed Libraries: libhandle.so
Installed Directories: /usr/include/xfs, /usr/lib/xfsprogs, /usr/share/xfsprogs, and /usr/share/doc/xfsprogs-6.1.1
Short Descriptions
fsck.xfs simply exits with a zero status, since XFS partitions are checked at mount time
mkfs.xfs constructs an XFS file system
xfs_admin changes the parameters of an XFS file system
xfs_bmap prints block mapping for an XFS file
xfs_copy copies the contents of an XFS file system to one or more targets in parallel
xfs_estimate for each directory argument, estimates the space that directory would take if it were copied to an XFS filesystem (does not cross mount points)
xfs_db is used to debug an XFS file system
xfs_freeze suspends access to an XFS file system
xfs_fsr improves the organization of mounted XFS filesystems. The reorganization algorithm operates on one file at a time, compacting or otherwise improving the layout of the file extents (contiguous blocks of file data)
xfs_growfs expands an XFS file system
xfs_info is equivalent to invoking xfs_growfs, but specifying that no change to the file system is to be made
xfs_io is a debugging tool like xfs_db, but is aimed at examining the regular file I/O path rather than the raw XFS volume itself
xfs_logprint prints the log of an XFS file system
xfs_mdrestore restores an XFS metadump image to a filesystem image
xfs_metadump copies XFS filesystem metadata to a file
xfs_mkfile creates an XFS file, padded with zeroes by default
xfs_ncheck generates pathnames from inode numbers for an XFS file system
xfs_quota is a utility for reporting and editing various aspects of filesystem quotas
xfs_repair repairs corrupt or damaged XFS file systems
xfs_rtcp copies a file to the real-time partition on an XFS file system
xfs_scrub checks and repairs the contents of a mounted XFS file system
xfs_scrub_all scrubs all mounted XFS file systems
xfs_spaceman reports and controls free space usage in an XFS file system
libhandle.so contains XFS-specific functions that provide a way to perform certain filesystem operations without using a file descriptor to access filesystem objects.
Packages for UEFI Boot
5.17 efivar-38
Introduction to efivar
The efivar package provides tools and libraries to manipulate EFI variables.
This package is known to build and work properly using an LFS 11.3 platform.
Package Information
-
Download (HTTP): https://github.com/rhboot/efivar/releases/download/38/efivar-38.tar.bz2
-
Download MD5 sum: 243fdbc48440212695cb9c6e6fd0f44f
-
Download size: 316 KB
-
Estimated disk space required: 18 MB
-
Estimated build time: less than 0.1 SBU
Additional Downloads
- Optional patch (Required for 32-bit systems): https://www.linuxfromscratch.org/patches/blfs/11.3/efivar-38-i686-1.patch
efivar Dependencies
Required
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/efivar
Installation of efivar
First, fix an issue in the Makefile that causes the package to be rebuilt during installation:
sed '/prep :/a\\ttouch prep' -i src/Makefile
Note
This package cannot function properly on a 32-bit system with a 64-bit UEFI implementation. Don’t install this package (or efibootmgr) on a 32-bit system unless you are absolutely sure you have a 32-bit UEFI implementation, which is very rare in practice.
If building this package on a 32-bit system, apply a patch:
[ $(getconf LONG_BIT) = 64 ] || patch -Np1 -i ../efivar-38-i686-1.patch
Build efivar with the following commands:
make
The test suite of this package is dangerous. Running it may trigger firmware bugs and make your system unusable without using some special hardware to reprogram the firmware.
Now, as the root user:
make install LIBDIR=/usr/lib
Command Explanations
LIBDIR=/usr/lib: This option overrides the default library directory of the package (/usr/lib64), which is not used by LFS.
Contents
Installed Programs: efisecdb and efivar
Installed Libraries: libefiboot.so, libefisec.so, and libefivar.so
Installed Directories: /usr/include/efivar
Short Descriptions
efisecdb is a utility for managing UEFI signature lists
efivar is a tool to manipulate UEFI variables
libefiboot.so is a library used by efibootmgr
libefisec.so is a library for managing UEFI signature lists
libefivar.so is a library for the manipulation of EFI variables.
5.18 efibootmgr-18
Introduction to efibootmgr
The efibootmgr package provides tools and libraries to manipulate EFI variables.
This package is known to build and work properly using an LFS 11.3 platform.
Package Information
-
Download (HTTP): https://github.com/rhboot/efibootmgr/archive/18/efibootmgr-18.tar.gz
-
Download MD5 sum: e170147da25e1d5f72721ffc46fe4e06
-
Download size: 48 KB
-
Estimated disk space required: 1.1 MB
-
Estimated build time: less than 0.1 SBU
efibootmgr Dependencies
Required
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/efibootmgr
Installation of efibootmgr
Build efibootmgr with the following commands:
make EFIDIR=LFS EFI_LOADER=grubx64.efi
This package does not have a test suite.
Now, as the root user:
make install EFIDIR=LFS
Command Explanations
EFIDIR=LFS: This option specifies the distro’s subdirectory name under /boot/efi/EFI. The build system of this package requires it to be set explicitly.
EFI_LOADER=grubx64.efi: This option specifies the name of the default EFI boot loader. It is set to match the EFI boot loader provided by GRUB.
Contents
Installed Programs: efibootdump and efibootmgr
Short Descriptions
efibootdump is a tool to display individual UEFI boot options, from a file or a UEFI variable
efibootmgr is a tool to manipulate the UEFI Boot Manager.
5.19 GRUB-2.06 for EFI
Introduction to GRUB
The GRUB package provides the GRand Unified Bootloader. On this page it will be built with UEFI support, which is not enabled for the GRUB built in LFS.
This package is known to build and work properly using an LFS 11.3 platform.
Package Information
- Download (HTTP): https://ftp.gnu.org/gnu/grub/grub-2.06.tar.xz
- Download MD5 sum: cf0fd928b1e5479c8108ee52cb114363
- Download size: 6.3 MB
- Estimated disk space required: 137 MB
- Estimated build time: 1.0 SBU (on 64-bit LFS)
Additional Downloads
Unicode font data used to display GRUB menu
- Download (HTTP): https://unifoundry.com/pub/unifont/unifont-15.0.01/font-builds/unifont-15.0.01.pcf.gz
- Download MD5 sum: c371b9b4a8a51228c468cc7efccec098
- Download size: 1.4 MB
GCC (only needed if building on 32-bit LFS)
- Download (HTTP): https://ftp.gnu.org/gnu/gcc/gcc-12.2.0/gcc-12.2.0.tar.xz
- Download MD5 sum: 73bafd0af874439dcdb9fc063b6fb069
- Download size: 81 MB
GRUB Dependencies
Recommended
efibootmgr-18 (runtime) and FreeType-2.13.0
Optional
User Notes: https://wiki.linuxfromscratch.org/blfs/wiki/grub-efi
Installation of GRUB
First, install font data as the root user:
mkdir -pv /usr/share/fonts/unifont &&
gunzip -c ../unifont-15.0.01.pcf.gz > /usr/share/fonts/unifont/unifont.pcf
Warning
Unset any environment variables which may affect the build:
unset {C,CPP,CXX,LD}FLAGS
Don’t try “tuning” this package with custom compilation flags: this package is a bootloader, and the low-level operations in its source code are likely to be broken by aggressive optimizations.
Fix an issue causing grub-install to fail when the /boot partition (or the root partition if /boot is not a separate partition) is created by e2fsprogs-1.47.0 or later:
patch -Np1 -i ../grub-2.06-upstream_fixes-1.patch
If you are running a 32-bit LFS, prepare a 64-bit compiler:
case $(uname -m) in i?86 )
tar xf ../gcc-12.2.0.tar.xz
mkdir gcc-12.2.0/build
pushd gcc-12.2.0/build
../configure --prefix=$PWD/../../x86_64-gcc \
--target=x86_64-linux-gnu \
--with-system-zlib \
--enable-languages=c,c++ \
--with-ld=/usr/bin/ld
make all-gcc
make install-gcc
popd
export TARGET_CC=$PWD/x86_64-gcc/bin/x86_64-linux-gnu-gcc
esac
Build GRUB with the following commands:
./configure --prefix=/usr \
--sysconfdir=/etc \
--disable-efiemu \
--enable-grub-mkfont \
--with-platform=efi \
--target=x86_64 \
--disable-werror &&
unset TARGET_CC &&
make
This package does not have a test suite providing meaningful results.
Now, as the root user:
make install &&
mv -v /etc/bash_completion.d/grub /usr/share/bash-completion/completions
Command Explanations
--enable-grub-mkfont: Build the tool named grub-mkfont to generate the font file for the boot loader from the font data we’ve installed.
Warning
If the recommended dependency FreeType-2.13.0 is not installed, it is possible to omit this option and build GRUB. However, if grub-mkfont is not built, or the Unicode font data is not available when GRUB is built, GRUB won’t install any font for the boot loader. The GRUB boot menu will be displayed using a coarse font, and the early stage of kernel initialization will be in “blind mode” — you can’t see any kernel messages before the graphics card driver is initialized. It will be very difficult to diagnose some boot issues, especially if the graphics driver is built as a module.
--with-platform=efi: Ensures building GRUB with EFI enabled.
--target=x86_64: Ensures building GRUB for x86_64 even if building on a 32-bit LFS system. Most EFI firmware on x86_64 does not support 32-bit bootloaders.
--target=i386: A few 32-bit x86 platforms have EFI support, and some x86_64 platforms have a 32-bit EFI implementation, but these are very old and rare. Use this option instead of --target=x86_64 only if you are absolutely sure that LFS is running on such a system.
Configuring GRUB
Using GRUB to make the LFS system bootable on UEFI platform will be discussed in Using GRUB to Set Up the Boot Process with UEFI.
Contents
See the page for GRUB in the LFS book.
5.20 Using GRUB to Set Up the Boot Process with UEFI
Turn Off Secure Boot
BLFS does not have the essential packages to support Secure Boot. To set up the boot process with GRUB and UEFI in BLFS, Secure Boot must be turned off from the configuration interface of the firmware. Read the documentation provided by the manufacturer of your system to find out how.
Create an Emergency Boot Disk
Ensure that an emergency boot disk is ready to “rescue” the system in case the system becomes un-bootable. To make an emergency boot disk with GRUB for an EFI based system, find a spare USB flash drive and create a vfat file system on it. Install dosfstools-4.2 first, then as the root user:
Warning
The following command will erase all directories and files in the partition. Make sure your USB flash drive contains no data which will be needed, and change sdx1 to the device node corresponding to the first partition of the USB flash drive. Be careful not to overwrite your hard drive with a typo!
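Before running the destructive command below, it is worth double-checking which device node really is the USB flash drive. One way (a sketch; the names and sizes will differ on your machine):

```shell
# List the block devices with their sizes and types; the USB drive is
# usually recognizable by its size
lsblk -o NAME,SIZE,TYPE

# Running this before and after plugging the drive in, and comparing the
# output, is a simple way to confirm which sdX node was just added
```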
mkfs.vfat /dev/sdx1
Still as the root user, use the fdisk utility to set the first partition of the USB flash drive to be an “EFI system” partition (change sdx to the device node corresponding to your USB flash drive):
fdisk /dev/sdx
Welcome to fdisk (util-linux 2.38.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): t
Partition number (1-9, default 9): 1
Partition type or alias (type L to list all): uefi
Changed type of partition 'Linux filesystem' to 'EFI System'.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Still as the root user, create a mount point for the EFI partition on the USB flash drive and mount it:
mkdir -pv /mnt/rescue &&
mount -v -t vfat /dev/sdx1 /mnt/rescue
Install GRUB for EFI on the partition:
grub-install --target=x86_64-efi --removable --efi-directory=/mnt/rescue --boot-directory=/mnt/rescue
Unmount the partition:
umount /mnt/rescue
Now the USB flash drive can be used as an emergency boot disk on any x86-64 UEFI platform. It will boot the system and show the GRUB shell. Then you can type commands to boot your operating system from the hard drive. To learn how to select the boot device, read the manual of your motherboard or laptop.
Kernel Configuration for UEFI support
Enable the following options in the kernel configuration and recompile the kernel if necessary:
Processor type and features --->
[*] EFI runtime service support [CONFIG_EFI]
[*] EFI stub support [CONFIG_EFI_STUB]
Enable the block layer --->
Partition Types --->
[*] Advanced partition selection [CONFIG_PARTITION_ADVANCED]
[*] EFI GUID Partition support [CONFIG_EFI_PARTITION]
Device Drivers --->
Firmware Drivers --->
[*] Mark VGA/VBE/EFI FB as generic system framebuffer [CONFIG_SYSFB_SIMPLEFB]
Graphics support --->
<*> Direct Rendering Manager [CONFIG_DRM]
[*] Enable legacy fbdev support for your modesetting driver [CONFIG_DRM_FBDEV_EMULATION]
<*> Simple framebuffer driver [CONFIG_DRM_SIMPLEDRM]
Frame buffer Devices --->
<*> Support for frame buffer devices ---> [CONFIG_FB]
Console display driver support --->
[*] Framebuffer Console support [CONFIG_FRAMEBUFFER_CONSOLE]
File systems --->
DOS/FAT/EXFAT/NT Filesystems --->
<*/M> VFAT (Windows-95) fs support [CONFIG_VFAT_FS]
Pseudo filesystems --->
<*/M> EFI Variable filesystem [CONFIG_EFIVAR_FS]
The meaning of the configuration options:
CONFIG_EFI_STUB
On EFI systems, GRUB boots the Linux kernel by invoking the EFI firmware to load it as an EFI application, so the EFI stub is needed to wrap the kernel as an EFI application.
CONFIG_SYSFB_SIMPLEFB, CONFIG_DRM, CONFIG_DRM_FBDEV_EMULATION, CONFIG_DRM_SIMPLEDRM, CONFIG_FB, and CONFIG_FRAMEBUFFER_CONSOLE
The combination of these options provides the Linux console support on top of the UEFI framebuffer. To allow the kernel to print debug messages at an early boot stage, they shouldn’t be built as kernel modules unless an initramfs will be used.
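Once the kernel is configured, the options above can be double-checked mechanically. The sketch below runs against a throwaway stand-in for the kernel’s .config; replace /tmp/demo-config with the real file (e.g. the .config in your kernel source tree), and note that the file contents here are illustrative:

```shell
# Stand-in for a kernel .config (illustrative excerpt only)
cat > /tmp/demo-config << 'EOF'
CONFIG_EFI=y
CONFIG_EFI_STUB=y
CONFIG_PARTITION_ADVANCED=y
CONFIG_EFI_PARTITION=y
CONFIG_SYSFB_SIMPLEFB=y
CONFIG_DRM=y
CONFIG_DRM_FBDEV_EMULATION=y
CONFIG_DRM_SIMPLEDRM=y
CONFIG_FB=y
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_VFAT_FS=m
CONFIG_EFIVAR_FS=y
EOF

# Report any option from this section that is neither built in (=y)
# nor built as a module (=m)
for opt in EFI EFI_STUB PARTITION_ADVANCED EFI_PARTITION SYSFB_SIMPLEFB \
           DRM DRM_FBDEV_EMULATION DRM_SIMPLEDRM FB FRAMEBUFFER_CONSOLE \
           VFAT_FS EFIVAR_FS; do
    grep -q "^CONFIG_${opt}=[ym]" /tmp/demo-config || echo "missing: CONFIG_${opt}"
done
```

Remember that for early boot messages the framebuffer options should report =y, not =m, unless an initramfs is used.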
Find or Create the EFI System Partition
On EFI based systems, the bootloaders are installed in a special FAT32 partition called an EFI System Partition (ESP). If your system supports EFI, and a recent version of some Linux distribution or Windows is pre-installed, it’s likely that the ESP has already been created. As the root user, list all the partitions on your hard drive (replace sda with the device corresponding to the appropriate hard drive):
fdisk -l /dev/sda
The “Type” column of the ESP should be EFI System.
If the system or the hard drive is new, or it’s the first installation of a UEFI-booted OS on the system, the ESP may not exist. In that case, create a new partition, make a vfat file system on it, and set the partition type to “EFI system”. See the instructions for the emergency boot device above as a reference.
Warning
Some (old) UEFI implementations may require the ESP to be the first partition on the disk.
Now, as the root user, create the mount point for the ESP, and mount it (replace sda1 with the device node corresponding to the ESP):
mkdir -pv /boot/efi &&
mount -v -t vfat /dev/sda1 /boot/efi
Add an entry for the ESP in /etc/fstab, so it will be mounted automatically during system boot:
cat >> /etc/fstab << EOF
/dev/sda1 /boot/efi vfat defaults 0 1
EOF
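The six fields of this entry are, in order: the device, the mount point, the file system type, the mount options, the dump flag, and the fsck pass number (see fstab(5)). Annotated:

```
# <device>   <mount point>   <type>   <options>   <dump>   <fsck pass>
/dev/sda1    /boot/efi       vfat     defaults    0        1
```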
Minimal Boot Configuration with GRUB and EFI
On UEFI based systems, GRUB works by installing an EFI application (a special kind of executable) into the ESP. The EFI firmware searches for boot loaders among the EFI applications listed in the boot entries recorded in EFI variables, and additionally at the hardcoded path EFI/BOOT/BOOTX64.EFI. Normally a boot loader should be installed into a custom path and that path should be recorded in the EFI variables. The use of the hardcoded path should be avoided if possible. However, in some cases we have to use the hardcoded path:
- The system is not booted with EFI yet, making EFI variables inaccessible.
- The EFI firmware is 64-bit but the LFS system is 32-bit, making EFI variables inaccessible because the kernel cannot invoke EFI runtime services with a different virtual address length.
- LFS is built for a Live USB, so we cannot rely on EFI variables, which are stored in NVRAM or EEPROM on the local machine.
- You are unable or unwilling to install efibootmgr for manipulating boot entries in EFI variables.
In these cases, follow these instructions to install the GRUB EFI application into the hardcoded path and make a minimal boot configuration. Otherwise it’s better to skip ahead and set up the boot configuration normally.
To install GRUB with the EFI application in the hardcoded path EFI/BOOT/BOOTX64.EFI, first ensure the boot partition is mounted at /boot and the ESP is mounted at /boot/efi. Then, as the root user, run the command:
Note
This command will overwrite /boot/efi/EFI/BOOT/BOOTX64.EFI. It may break a bootloader already installed there. Back it up if you are not sure.
grub-install --target=x86_64-efi --removable
This command will install the GRUB EFI application into the hardcoded path /boot/efi/EFI/BOOT/BOOTX64.EFI, so the EFI firmware can find and load it. The remaining GRUB files are installed in the /boot/grub directory and will be loaded by BOOTX64.EFI during system boot.
Note
The EFI firmware usually prefers the EFI applications with a path stored in EFI variables to the EFI application at the hardcoded path. So you may need to invoke the boot selection menu or firmware setting interface to select the newly installed GRUB manually on the next boot. Read the manual of your motherboard or laptop to learn how.
If you’ve followed the instructions in this section and set up a minimal boot configuration, now skip ahead to “Creating the GRUB Configuration File”.
Mount the EFI Variable File System
The installation of GRUB on a UEFI platform requires that the EFI Variable file system, efivarfs, is mounted. As the root user, mount it if it’s not already mounted:
mountpoint /sys/firmware/efi/efivars || mount -v -t efivarfs efivarfs /sys/firmware/efi/efivars
Note
If the system is booted with UEFI and systemd, efivarfs will be mounted automatically. However, in the LFS chroot environment it still needs to be mounted manually.
Warning
If the system is not booted with UEFI, the directory /sys/firmware/efi will be missing. In this case you should boot the system in UEFI mode with the emergency boot disk or using a minimal boot configuration created as above, then mount efivarfs and continue.
Setting Up the Configuration
On UEFI based systems, GRUB works by installing an EFI application (a special kind of executable) into /boot/efi/EFI/[id]/grubx64.efi, where /boot/efi is the mount point of the ESP, and [id] is replaced with an identifier specified in the grub-install command line. GRUB will create an entry in the EFI variables containing the path EFI/[id]/grubx64.efi so the EFI firmware can find grubx64.efi and load it.
grubx64.efi is very small (136 KB with GRUB-2.06), so it will not use much space in the ESP. A typical ESP size is 100 MB, sized for the Windows Boot Manager, which uses about 50 MB of the ESP. Once grubx64.efi has been loaded by the firmware, it loads the GRUB modules from the boot partition; the default location is /boot/grub.
As the root user, install the GRUB files into /boot/efi/EFI/LFS/grubx64.efi and /boot/grub. Then set up the boot entry in the EFI variables:
grub-install --bootloader-id=LFS --recheck
If the installation is successful, the output should be:
Installing for x86_64-efi platform.
Installation finished. No error reported.
Issue the efibootmgr | cut -f 1 command to recheck the EFI boot configuration. An example of the output is:
BootCurrent: 0000
Timeout: 1 seconds
BootOrder: 0005,0000,0002,0001,0003,0004
Boot0000* ARCH
Boot0001* UEFI:CD/DVD Drive
Boot0002* Windows Boot Manager
Boot0003* UEFI:Removable Device
Boot0004* UEFI:Network Device
Boot0005* LFS
Note that 0005 is the first in the BootOrder, and Boot0005 is LFS. This means that on the next boot, the version of GRUB installed by LFS will be used to boot the system.
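Output like this is also easy to script against. The sketch below works on a saved copy of the output (captured here into a throwaway file, so it does not depend on real EFI variables being accessible):

```shell
# Sample efibootmgr output as shown above, saved to a throwaway file
cat > /tmp/demo-bootvars << 'EOF'
BootCurrent: 0000
Timeout: 1 seconds
BootOrder: 0005,0000,0002,0001,0003,0004
Boot0000* ARCH
Boot0001* UEFI:CD/DVD Drive
Boot0002* Windows Boot Manager
Boot0003* UEFI:Removable Device
Boot0004* UEFI:Network Device
Boot0005* LFS
EOF

# Pick the first entry number in BootOrder, then print that entry's line
first=$(sed -n 's/^BootOrder: //p' /tmp/demo-bootvars | cut -d, -f1)
grep "^Boot${first}\*" /tmp/demo-bootvars    # prints: Boot0005* LFS
```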
Creating the GRUB Configuration File
Generate /boot/grub/grub.cfg to configure the boot menu of GRUB:

cat > /boot/grub/grub.cfg << EOF
# Begin /boot/grub/grub.cfg
set default=0
set timeout=5

insmod part_gpt
insmod ext2
set root=(hd0,2)

insmod all_video
if loadfont /boot/grub/fonts/unicode.pf2; then
  terminal_output gfxterm
fi

menuentry "GNU/Linux, Linux 6.1.11-lfs-11.3" {
  linux /boot/vmlinuz-6.1.11-lfs-11.3 root=/dev/sda2 ro
}

menuentry "Firmware Setup" {
  fwsetup
}
EOF

(hd0,2), sda2, and 6.1.11-lfs-11.3 must match your configuration.
Note
From GRUB’s perspective, the files are relative to the partitions used. If you used a separate /boot partition, remove /boot from the above paths (to kernel and to unicode.pf2). You will also need to change the “set root” line to point to the boot partition.
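For example, if /boot is a separate partition that GRUB sees as (hd0,2), the corresponding fragment would look like this (the device names and kernel version here are illustrative):

```
set root=(hd0,2)

if loadfont /grub/fonts/unicode.pf2; then
  terminal_output gfxterm
fi

menuentry "GNU/Linux, Linux 6.1.11-lfs-11.3" {
  linux /vmlinuz-6.1.11-lfs-11.3 root=/dev/sda3 ro
}
```

The paths drop the /boot prefix because they are now relative to the root of the boot partition itself.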
The Firmware Setup entry can be used to enter the configuration interface provided by the firmware (sometimes called “BIOS configuration”).
Dual-booting with Windows
Add a menu entry for Windows into grub.cfg:
cat >> /boot/grub/grub.cfg << EOF
# Begin Windows addition
menuentry "Windows 11" {
insmod fat
insmod chain
set root=(hd0,1)
chainloader /EFI/Microsoft/Boot/bootmgfw.efi
}
EOF
(hd0,1) should be replaced with the GRUB designated name for the ESP. The chainloader directive can be used to tell GRUB to run another EFI executable, in this case the Windows Boot Manager. You may put more usable tools in EFI executable format (for example, an EFI shell) into the ESP and create GRUB entries for them, as well.