A practical explanation of how BIOS/UEFI booting works and how to recover broken Windows and Linux bootloaders.
This breakdown primarily explains how the boot process works and how bootloaders interact with firmware, partitions, and the operating system.
It also includes practical recovery procedures for common boot failures. The focus is educational first, with repair steps provided as examples of how these mechanisms are used in practice.
If you want to understand the concepts, read from the beginning. Otherwise, feel free to jump directly to the Common failure scenarios section, which contains practical recovery procedures.
If you find any errors, mistakes, or typos, or want to help improve or restructure any section, feel free to open an issue or submit a pull request.
If you found this useful, consider starring the repository.
Things you might need to have in hand (if you're here for the recovery process)
- Working Linux machine or a live bootable drive
- Windows bootable drive
- Explaining both partitioning models
- GPT boot tree
- "MBR & Legacy" boot tree
- Diskpart
- Basic knowledge, e.g. listing disks, selecting a disk/partition, etc.
- Diskpart (Advanced)
- Resizing partitions
- Shrinking
- Extending
- Creating a partition
- Formatting a partition
- Assigning a letter
- Deleting a partition
- Bootloader recovery
- GPT & UEFI bootloader recovery
- Missing EFI partition restoration
- Creating an EFI partition screenshots
- GPT bootloader recovery screenshots
- MBR & Legacy bootloader recovery
- Legacy bootloader recovery screenshots
Note: The "Common failure scenarios" section will be completed later.
- fs / rootfs → root filesystem (the Linux root directory /)
- ESP → EFI System Partition, the partition from which UEFI loads EFI boot entries
-
MBR
The older scheme for partitioning disks (personally I hate it & here's why):
MBR supports only four primary partitions, and to boot from an MBR disk, the boot partition must be one of them. One of those four slots can instead hold an extended partition, which can contain more than 100 logical partitions; however, your device's BIOS can't boot from logical partitions.
When your device boots up, the BIOS looks for the active primary partition and boots from it; there's no other way to boot a legacy-installed system. There is, however, a hybrid way of having more than 4 bootable systems, or of escaping the need to activate a partition to boot from: just make an EFI partition and boot from it instead, by using UEFI mode in your device's BIOS instead of legacy. More on how EFI works later, but a quick brief: an EFI partition can hold more than one entry, and you can select which one to boot from your BIOS.
A side note: MBR relies on the first 512 bytes of the disk, which hold information such as which primary partition is active. The BIOS reads these bytes and knows where to boot from.
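Those 512 bytes can be poked at with standard tools. A minimal sketch that builds a synthetic MBR in a temp file and reads the fields back with dd and od (the offsets match the byte layout described here; nothing below touches a real disk):

```shell
# Build a synthetic 512-byte "MBR" so we can poke at the offsets safely.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=1 status=none

# Byte 446: first partition table entry; 0x80 marks it active/bootable.
printf '\x80' | dd of="$img" bs=1 seek=446 conv=notrunc status=none

# Bytes 510-511: the 0x55 0xAA boot signature the BIOS checks for.
printf '\x55\xaa' | dd of="$img" bs=1 seek=510 conv=notrunc status=none

# Read the fields back.
flag=$(od -An -tx1 -j446 -N1 "$img" | tr -d ' \n')
sig=$(od -An -tx1 -j510 -N2 "$img" | tr -d ' \n')
echo "active flag: $flag"     # 80 -> this entry is bootable
echo "boot signature: $sig"   # 55aa -> BIOS considers the sector valid
```

On a real disk you can read (read-only!) the same fields with e.g. `sudo od -An -tx1 -j510 -N2 /dev/sda`.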
Bytes     What's there?
0–445     Bootloader code
446–509   Partition table (4 primary entries × 16 bytes each)
510–511   Boot signature

MBR Legacy booting tree:
Before reading the tree, let this be your mental model for it:

BIOS
└── MBR (find partition)
    └── Boot Sector (understand filesystem)
        └── Bootloader (complex logic)
            └── OS kernel

The tree:

Turned on your device?
└── BIOS initializes hardware
    └── BIOS reads the first 512 bytes of the first disk in the BIOS boot order and executes them.
        └── This code looks for the currently active partition and loads that partition's boot sector (aka Volume Boot Record, or VBR for short). The VBR is just a tiny program specified by each operating system, and it does the following:
            ├── Windows
            │   └── The executed VBR
            │       ├── Reads the NTFS metadata
            │       └── Locates the file `bootmgr`
            │           └── Loads `bootmgr` into memory
            │               └── Jumps to `bootmgr` (executes it)
            │                   └── `bootmgr` runs winload.exe, which then loads the Windows kernel.
            └── Linux
                └── The executed VBR
                    └── Loads a set of hardcoded sector addresses at the beginning of the active partition (instead of scanning the fs for the right addresses). In short, this stage loads a specified set of bytes from the active partition and executes them; that set of bytes is "core.img", the GRUB core.
                        └── The GRUB core is actually fs-aware, and can load whatever it needs from the GRUB files, e.g. /boot/grub files like GRUB modules.
                            └── GRUB then shows you those nice entries, which are your installed operating systems.
                                └── After choosing a Linux system (for this example)
                                    ├── Loads the Linux kernel and initramfs into RAM
                                    └── Executes the kernel

P.S: While the VBR code is OS-specific, it's also filesystem-specific. For example, NTFS has a specific VBR structure; a FAT32 fs has a different one.
-
GPT
The modern scheme for partitioning disks! (personally I love it & again here's why):
So I just said that MBR can only handle four primary partitions, right? GPT, on the other hand, doesn't have that hard limit at all, and it doesn't depend on activating a partition to boot from (makes sense, right?). Instead, you can create up to 128 partitions, all of them primary; in fact, you can't create logical or extended partitions on a GPT-formatted disk at all.
On a GPT-partitioned disk there's no equivalent of the MBR's first 512 bytes holding the disk info and the active partition. Instead, the UEFI firmware scans the disk for partitions marked with the ESP flag (EFI System Partition), then looks inside them for EFI entries. Typically the firmware only loads entries from the first ESP per disk, but you can totally have multiple ESPs with as many entries as you need inside them. It would be a pain to manage them, although some BIOSes let you add an entry from a disk directly inside the BIOS setup.
GPT and UEFI booting tree:
Turned on your device?
└── UEFI firmware initializes hardware
    ├── UEFI reads entries from NVRAM; each NVRAM entry is just a pointer to a file path on the EFI System Partition (aka ESP)
    └── No NVRAM entries? It scans the first ESP looking for entries. Some motherboards scan the ESP on each boot anyway.
        └── Then executes the first .efi in the boot order
            ├── Windows
            │   └── UEFI loads `\EFI\Microsoft\Boot\bootmgfw.efi` from the ESP
            │       └── `bootmgfw.efi` executes `winload.efi`
            │           └── `winload.efi` loads the Windows kernel and drivers into RAM and executes the kernel.
            └── Linux
                └── UEFI loads `\EFI\GRUB\grubx64.efi` or a bootloader-specific .efi file from the ESP, e.g. GRUB
                    └── GRUB shows the OS selection menu
                        └── Choosing Linux for this example
                            └── Loads the Linux kernel and initramfs into RAM and then executes the kernel

P.S: Unlike MBR/legacy BIOS, UEFI does not use fixed sectors for early boot stages. All bootloaders are normal EFI executables stored in the EFI System Partition (usually FAT32).
-
How does it work?:
An EFI System Partition (ESP) is a FAT32 partition marked with the ESP flag. UEFI firmware scans this partition for EFI executables (.efi files), and that's pretty much it! Really, there's no magic or weird handling of this partition: it's just a simple FAT32 partition with a flag, though it follows a specific scheme for arranging the files inside it.
And here's an example of what an EFI partition contains:
EFI/
├── SYSTEM-NAME OR WHATEVER REALLY (this is the boot entry that will show up in the BIOS)/
│   ├── (might have the boot .efi directly here, though)
│   └── Boot/
│       ├── boot.efi
│       └── (we can have whatever files the EFI needs here or outside this folder, it doesn't really matter)
├── (some examples:)
├── Microsoft/
│   └── Boot/
│       ├── BCD
│       ├── BCD.LOG
│       ├── bg-BG/
│       │   └── bootmgfw.efi.mui
│       ├── bootmgfw.efi
│       ├── bootmgr.efi
│       └── (etc)
├── debian/
│   ├── BOOTX64.CSV
│   ├── fbx64.efi
│   ├── grub.cfg
│   ├── grubx64.efi
│   ├── mmx64.efi
│   ├── background.png
│   └── shimx64.efi
└── Boot/
    ├── bootx64.efi
    ├── fbx64.efi
    └── mmx64.efi

While you can make an EFI entry yourself, typically it's created automatically while installing any OS.
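Since the ESP is just a FAT32 filesystem with files in conventional places, a skeleton like this can be mimicked with plain mkdir/touch. A sketch under a temp directory (the `debian` and `Microsoft` names mirror the example above; real loaders are of course not empty files, and installers normally create all of this for you):

```shell
# Mimic a minimal ESP layout under a temp dir, just to show it's ordinary files.
esp=$(mktemp -d)

mkdir -p "$esp/EFI/Microsoft/Boot" "$esp/EFI/debian" "$esp/EFI/Boot"
touch "$esp/EFI/Microsoft/Boot/bootmgfw.efi" \
      "$esp/EFI/debian/grubx64.efi" \
      "$esp/EFI/Boot/bootx64.efi"   # \EFI\Boot\bootx64.efi is the fallback loader path

# Each directory under EFI/ is what firmware can surface as a boot entry.
ls "$esp/EFI"
```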
-
NVRAM?
NVRAM plays a really important role when it comes to booting into something. Each operating system registers its EFI loader in NVRAM as a bootable entry, which is pretty much just a pointer to a specific disk plus the .efi file path inside that disk.

HD(1,GPT,XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX <- GUID (partition ID),0x800,0x100000)/EFI/ubuntu/grubx64.EFI -> (Simplified)

Sometimes, if your entry isn't in NVRAM, it won't show up as a bootable option. Resetting the BIOS typically resets the NVRAM, which makes the firmware search the ESPs again; and when you boot from an entry, it may register that entry in NVRAM itself. Most modern firmware scans the ESPs on each boot anyway.
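That entry format splits cleanly into "which partition" and "which file". A sketch that picks the two apart with shell parameter expansion (the GUID below is a dummy placeholder, and the entry is in the simplified form shown here, not efibootmgr's exact output):

```shell
# A sample NVRAM-style entry (dummy GUID), in the simplified form above.
entry='HD(1,GPT,12345678-1234-1234-1234-123456789abc,0x800,0x100000)/EFI/ubuntu/grubx64.EFI'

# Everything up to the ')' says WHERE: partition 1 of a GPT disk, found by GUID.
partition="${entry%%)*})"
# Everything after it is the loader path inside that partition's filesystem.
loader="${entry#*)}"

echo "partition: $partition"
echo "loader:    $loader"      # -> /EFI/ubuntu/grubx64.EFI
```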
Using efibootmgr in Linux adds an entry to the NVRAM:
efibootmgr --create --disk /dev/*disk-path* --part 1 --loader '\EFI\debian\grubx64.efi' --label "Debian"
Windows:
bcdboot C:\Windows /l en-us /s S: /f UEFI

NVRAM is quite annoying when handling multiple boot records, or when removing/adding entries manually; it can override paths and cause quite weird behavior.
Diskpart (Common use cases)
list disk
Outputs something like:
select disk N
(Where N is the desired disk number).
list volume
list partition
Outputs something like:
When listing volumes instead of partitions, though, you get much more information about each one than `list partition` gives you, such as the file system type (Fs for short), Type, Status, partition letter (Ltr), and Info, which are quite useful.
Also, one of the two may fail depending on the context of the disk (I believe).
select volume N
select partition N
And then, you can either activate them or assign a letter for them in case you were doing some recovery stuff:
active (in case your disk is MBR: activate the partition listed above the C: partition, which usually has some Reserved flag/type)
assign letter=N (replace N with whatever letter you want; usually C for the Windows partition and S for the EFI partition).
For short, you can use (in all use cases)
vol (instead of volume)
part (instead of partition)
sel (instead of select)
Diskpart (Advanced)
Take caution!
Using any of these commands without proper knowledge of how they behave can cause loss of data! Even if command (A) does (B), it won't always do (B); sometimes errors happen and command (A) does (G) instead. It's recommended to always try tools on disposable hardware (e.g. virtual machines) before trying them on your actual hardware.
Select your desired partition to resize and do one of the following actions:
Shrinking:
shrink querymax
Would let you know the maximum available shrink size. And using
shrink desired=N
Where N is the amount of space to take out of the partition, in megabytes.
After shrinking a partition, listing the partitions shows it has been shrunk, but doesn't show the freed space.
Listing the disks does show the available free space, though.
Extending:
extend size=N
Where N is the amount of space to add to the currently selected partition, in megabytes. Or
extend
To add all the free space to the currently selected partition.
Unfortunately, diskpart can't create a partition in free space that lies before an existing partition; the free space must come after it.
But you can always use Gparted, MiniTool Partition Wizard, or any partitions program you like. (Gparted and the MiniTool have a nice and easy to use GUI that would make it MUCH easier to deal with any of this.)
create partition primary size=N
Where N is your desired new partition size.
You can use efi, extended, logical or MSR (aka Microsoft reserved partition) instead of "primary" if you want to. But note that GPT partitioned disks cannot have extended or logical partitions.
P.S: After creating a partition, it would be automatically selected. You can verify by using list part. And you may need to format this partition and assign a letter for it in order to use it.
IMPORTANT double check the currently selected partition! using the following command will format the whole partition, you can't reverse it!
format fs=N quick
Where N is your desired filesystem type; choose from [ntfs, fat32, exfat]. You can add label=YourLabel at the end of the previous command to set a label.
To be able to handle the currently selected partition outside diskpart, you need to assign a letter to it e.g. C, S or whatever really.
assign letter=N
Where N is your desired letter.
C is the most common letter for the Windows installation partition, and S for the boot partition, e.g. the EFI partition, or the System Reserved partition for legacy.
IMPORTANT double check the currently selected partition! using the following command will delete it, you can't reverse it!
delete partition
To force delete a partition, use
delete partition override
"override" flag is used to delete protected partitions e.g. Recovery, MSR. etc.
Boot into a Windows bootable thumb drive
└── Select the language as you wish
    └── Select/click "Repair my PC"
        └── Troubleshoot → Advanced options
            └── Click on Command Prompt
diskpart
list disk
If your Windows installation disk shows a "*" symbol under the Gpt column, it's a GPT-partitioned disk. If there's no "*", it's an MBR disk.
#(This comment is for the next command)
diskpart
list disk
#(Where N is your disk number, select the disk that contains Windows)
sel disk N
list vol
#(Where N is the volume that contains Windows (C:); you can identify it by its size, i.e. the C: drive size, and its Fs, which is NTFS)
sel vol N
#(You should see * before the volume you selected above)
list vol
assign letter=C
#(Where N is your EFI partition number, You'll see FAT32 under its Fs (file system))
sel vol N
assign letter=S
exit
diskpart
list disk
sel disk N
list vol
sel vol N
assign letter=C
sel vol N
assign letter=S
exit
bcdboot C:\Windows /s S: /f UEFI
Don't worry, we can make it ourselves.
diskpart
list disk #(Next to each disk you'll see its free space. If your disk has any, select it and jump straight to the EFI-creation part.)
sel disk N #(Change N to your installation disk number.)
list part
sel part N #(Change N to the partition you want to shrink to make room for the new EFI partition.)
shrink querymax #(Use this to know the available shrink space inside that partition)
shrink desired=1024 #(I like to make my EFI partitions 1GB in size, but you can make them smaller; change 1024 to whatever size in megabytes you want. I've never tried anything below 512, so I can't tell if it works.)
create partition efi size=1024
list vol #(Make sure there's a new partition with the size 1024MB, or the size you chose above, and that its Fs is RAW.)
format fs=fat32 quick #(Make sure the currently selected partition is the one from the `list vol` step. You can check by looking for the `*` symbol before the volume in the list, e.g. "* Volume 4 FAT32 Partition 1024MB Healthy Hidden")
assign letter=S
Then follow the previous section again, but skip the step that assigns a letter to the EFI partition, since we already did that.
Without comments:
diskpart
list disk
sel disk N
list part
sel part N
shrink querymax
shrink desired=1024
create partition efi size=1024
list vol
format fs=fat32 quick
assign letter=S
Further explanation:
We need some free space in order to create our new EFI partition, right?
So we need to shrink a partition (or delete one) to free some space. After doing so,
We use
create partition efi size=1024
To create a new ESP flagged partition with the size of 1024 megabytes or basically 1GB.
Then we select the new partition, if it's not already selected, and format it as FAT32, since that's the default format for EFI partitions:
format fs=fat32 quick
AS ALWAYS, DOUBLE CHECK YOUR SELECTED PARTITION BEFORE FORMATTING OR DELETING!!!
then we assign the letter S for that EFI partition to handle it with
bcdboot C:\Windows /s S: /f UEFI
Where C is the Windows installation partition and S is the EFI partition. For more reference on diskpart, go to the Diskpart (Advanced) section.
#(This comment is for the next command)
diskpart
list disk
#(Where N is your disk number, select the disk that contains your Windows installation)
sel disk N
list vol
#(Where N is the volume that contains Windows (C:); you can identify it by its size, i.e. the C: drive size, and its Fs, which is NTFS)
sel vol N
#(You should see * before the volume you selected above)
list vol
assign letter=C
#(Where N is your BOOT partition number)
sel vol N
assign letter=S
#(Where N is your BOOT partition)
sel vol N
active
exit
diskpart
list disk
sel disk N
list vol
sel vol N
list vol
assign letter=C
sel vol N
assign letter=S
sel vol N
active
exit
IT CAN BE THE Windows C: DRIVE: if the BOOT partition is the same as C:, skip assigning the letter S; assigning the letter C is enough.
It typically has NTFS under Fs when doing (list vol), and System Reserved under its type when doing (list part).
It's much smaller than the C: partition, and has to be a primary partition.
Unlike EFI, where it was easy to determine which is which, with a legacy installation both partitions are NTFS :(
#(Where the first C: is your Windows C: and the second C: is your BOOT partition (or S: if you assigned one))
bcdboot C:\Windows /s C: /f BIOS
OR
bcdboot C:\Windows /s S: /f BIOS
Take caution with these, as I've never tried them before: fixmbr rewrites the MBR boot code, fixboot writes a new boot sector for the active partition, and rebuildbcd scans for Windows installations and rebuilds the BCD store. I've never needed them, hence why I don't really know how they act.
#(writes a new MBR) TODO
bootrec /fixmbr
#(writes a new boot sector for the active partition)
bootrec /fixboot
#(scans for Windows installations and rebuilds the BCD)
bootrec /rebuildbcd
/boot partition/directory: it can be either a separate partition or a directory inside the rootfs.
GPT/MBR & UEFI:
Luckily, modern GRUB and other bootloaders can handle encrypted file systems, so a separate boot partition isn't strictly needed anymore. From my pov, though, /boot is much easier to handle when it's separate: the kernel then handles the encryption instead of GRUB, since it doesn't make much sense to encrypt the boot partition (I just don't like GRUB decryption).
MBR & legacy:
Since MBR handles the booting process differently:
The BIOS reads the first 512 bytes → these bytes are executed to load the bootloader, e.g. GRUB → GRUB handles the rest. But how could the BIOS-loaded stage find GRUB if it can't read the root partition where /boot lives (e.g. when it's encrypted)? Thus a separate boot partition is required.
But what is it though?
As its name says, it's the place GRUB or the BIOS loads from to handle the boot process, before the system itself is able to take over.
Mainly it contains two important files, "initrd.img" and "vmlinuz". Both are static files; they usually don't change unless you rebuild them or the system performs a kernel update.
There's also a directory called "EFI", which is used as a mount point for the ESP (aka the EFI partition). Neither /boot nor /boot/EFI is required for anything after the boot process; literally nothing accesses or uses them unless the system updates the kernel or GRUB.
So, what exactly happens with these on each boot?
First, GRUB (or whichever bootloader took over from the BIOS/UEFI) loads the kernel, which is the "vmlinuz" file. The kernel then mounts the initramfs as the temporary root filesystem and runs /init, an executable that lives inside the initrd and prepares everything for the real root file system to take over, a bit like a chroot, except it's literally discarded after the real system takes over.
vmlinuz is your system kernel, the same kernel your real system uses. Once the bootloader loads it into RAM it never changes; the vmlinuz file itself isn't touched anymore. It just goes to RAM and gets executed, where it lives until the end of this boot cycle.
initrd.img (aka the initial RAM filesystem, initramfs for short) is the early userspace: the kernel mounts it as root, then it loads the required kernel modules, tools, and drivers and executes the early boot logic from that fs.
Here's a low quality tree view for what happens on each boot:
Turned your device on?
└── UEFI/BIOS
└── Loads the bootloader (aka grub) from EFI/Legacy
├── Bootloader loads (To the ram)
│ ├── vmlinuz (The kernel)
│ └── initrd.img (initramfs)
└── Kernel
│ ├── Decompresses initramfs into RAM
│ ├── Mounts the decompressed initramfs as /
│ └── Executes /init inside initramfs
├── /init (initramfs) -> (The initramfs image that's currently mounted as / contains these tools/kernel modules in it)
│ ├── Loads required kernel modules (loadable .ko files)
│ ├── Detects hardware
│ ├── Unlocks encryption (if any)
│ ├── Activates LVM (if any)
│ ├── Mounts the real root file system
│ └── Executes "exec switch_root /sysroot /sbin/init" (Think of this like chroot to the real root fs)
│ ├── Moves /proc, /sys, /dev to the new root (This part is really important, since /dev /sys /proc are just kernel paths, not persistent directories)
│ ├── Makes the real root filesystem become /
│ ├── Deletes the old initramfs filesystem
│ └── Executes the specified program (usually /sbin/init)
│ └── initramfs memory is freed
└── /sbin/init
└── Takes over and starts the real system init process (aka systemd, OpenRC, SysVinit, Runit, Upstart ... etc)
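In most distros, the /init stage in the tree above is literally a shell script inside the initramfs image. A heavily simplified, illustrative skeleton of one (written to a temp file here; the /dev/sdb4 root device is just this guide's running example, and real versions are generated by update-initramfs and do much more):

```shell
# Illustrative skeleton of an initramfs /init; NOT a drop-in replacement.
init=$(mktemp)
cat > "$init" <<'EOF'
#!/bin/sh
# The kernel mounted the initramfs as /; bring up the kernel pseudo-filesystems.
mount -t proc     proc     /proc
mount -t sysfs    sysfs    /sys
mount -t devtmpfs devtmpfs /dev

# ...load storage/crypto modules, unlock LUKS, activate LVM here...

# Mount the real root (example device) and hand over; switch_root moves
# /proc /sys /dev, makes /sysroot the new /, and frees the initramfs memory.
mount /dev/sdb4 /sysroot
exec switch_root /sysroot /sbin/init
EOF
chmod +x "$init"
head -2 "$init"
```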
The first step is booting into another working Linux system (maybe another device that runs your same distro, or a bootable installation drive for it; the working system doesn't have to match the broken one, but avoid a different architecture, since you won't be able to chroot into the broken one without emulation).
Connect the disk that contains your Linux system, then open the terminal.
Unencrypted Linux file system
We would need an identifier for the partition who contains the /boot directory and by doing:
sudo lsblk
#(OR)
sudo blkid

You will get something like:
lsblk:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 20G 0 disk
└─sda1 8:1 0 20G 0 part /
sdb 8:16 0 447.1G 0 disk
├─sdb1 8:17 0 16M 0 part
├─sdb2 8:18 0 119.2G 0 part
├─sdb3 8:19 0 1000M 0 part
├─sdb4 8:20 0 234.2G 0 part
└─sdb5 8:21 0 953M 0 part
sr0 11:0 1 1024M 0 rom
blkid:
/dev/sda1: UUID="a8483fab3b0d" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="d44aac11-11"
/dev/sdb1: PARTLABEL="Microsoft reserved partition" PARTUUID="f6c6aa7eede9"
/dev/sdb2: BLOCK_SIZE="512" UUID="92DA8218" TYPE="ntfs" PARTLABEL="Basic data partition" PARTUUID="043f9f4e4029"
/dev/sdb3: LABEL_FATBOOT="EFI" LABEL="EFI" UUID="3S52-D800" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI" PARTUUID="97e02c48aaa2"
/dev/sdb4: UUID="e4285b2021fd" BLOCK_SIZE="4096" TYPE="ext4" PARTLABEL="Debian" PARTUUID="15effaeb6f80"
/dev/sdb5: LABEL="DebianBoot" UUID="3a352da4027b" BLOCK_SIZE="4096" TYPE="ext4" PARTLABEL="DebianBoot" PARTUUID="32d43b285464"
Take caution: this will always differ from my output; it lists your disks and their partitions.
Let's unpack this result a little bit. Each header that has "disk" under its TYPE column is the identifier for a whole disk. Let's pick sdb as an example: sdb is the whole-disk identifier, while
sdb1, sdb2, sdb3, etc. are that disk's partition identifiers.
If you couldn't identify the partition that contains your root file system (your system's "/") yet, look at the partition sizes, or at the labels in the blkid output if you set any up before.
In my case, the fs (root file system) lives under sdb4.
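Later steps match fstab UUIDs against these devices, which is easy to script. A sketch that looks a UUID up in blkid-style output (fed two shortened sample lines from the output above; on a real system you'd capture `sudo blkid` instead):

```shell
# Two sample lines in blkid's format (on a real system: out=$(sudo blkid)).
out='/dev/sdb4: UUID="e4285b2021fd" TYPE="ext4" PARTLABEL="Debian"
/dev/sdb5: LABEL="DebianBoot" UUID="3a352da4027b" TYPE="ext4"'

# Print the /dev/... device whose line carries the given filesystem UUID.
uuid_to_dev() {
    printf '%s\n' "$out" | grep " UUID=\"$1\"" | cut -d: -f1
}

uuid_to_dev 3a352da4027b    # -> /dev/sdb5
```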
After successfully identifying the path (which is /dev/sdb4 for me), we can now mount it.
If you see something under MOUNTPOINTS next to your partition, say /media/something, make sure to unmount it using
sudo umount -l /media/something. If you don't, you'll get "target is busy" when doing the following.
sudo mkdir -p /mnt/broken_Linux
sudo mount /dev/sdb4 /mnt/broken_Linux

After doing so, /mnt/broken_Linux is your broken system's root directory.
Doing ls /mnt/broken_Linux now should give you a listing of your broken system's root directory:
remohexa@debian:~$ ls /mnt/broken_Linux
1.png boot etc initrd.img lib lost+found mnt proc run srv tmp var vmlinuz.old
bin dev home initrd.img.old lib64 media opt root sbin sys usr vmlinuz

Cool, we're halfway there!
For the next step, we need to bind-mount some of our live system's paths, namely /dev, /proc, and /sys, since these are runtime paths set up by the running kernel for accessing hardware: USBs, disks, etc.
We can do that by running the following commands:
sudo mount -o bind /dev /mnt/broken_Linux/dev
sudo mount -o bind /proc /mnt/broken_Linux/proc
sudo mount -o bind /sys /mnt/broken_Linux/sys

The first /dev is your working system's dev path, and the second is your broken system's dev path. By doing
mount -o bind, we're pointing the broken system's /dev at the working system's one, so when you chroot into the broken system you'll have access to the hardware.
Now we need a shell running inside the broken system; using chroot with the broken system's root directory does exactly that:
sudo chroot /mnt/broken_Linux /bin/bash

We're finally accessing the broken system. Now the most important step is figuring out whether /boot is separate from the root directory, and looking into /etc/fstab does exactly that.
ALL OF THE FOLLOWING ARE UNDER THE BROKEN Linux SHELL.
cat /etc/fstab

If you find something like UUID=3a352da4027b /boot ext4 defaults 0 2
then your /boot is separate; if not, it's a directory under root.
An easier way to find out is to just list /boot (ls /boot): since we haven't mounted it yet, if you see anything under /boot, e.g. grub, initrd, vmlinuz, etc., then it's not separate.
However, we still need to find out whether the installation is UEFI or legacy. Look at the previous /etc/fstab result: if you find an EFI mount point like
UUID=3S52-D800 /boot/EFI vfat umask=0077 0 1
then it's definitely a UEFI installation; if not, it's legacy.
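Both checks (separate /boot? UEFI?) just look at the second column of fstab. A sketch running them against a sample fstab held in a variable (on the broken system you'd read /etc/fstab itself):

```shell
# Sample fstab contents (on the broken system: fstab=$(cat /etc/fstab)).
fstab='UUID=e4285b2021fd /         ext4 defaults   0 1
UUID=3a352da4027b /boot     ext4 defaults   0 2
UUID=3S52-D800    /boot/EFI vfat umask=0077 0 1'

# Column 2 is the mount point; that's all both questions need.
if printf '%s\n' "$fstab" | awk '$2 == "/boot" { f=1 } END { exit !f }'; then
    echo "separate /boot partition"
fi
if printf '%s\n' "$fstab" | awk 'tolower($2) == "/boot/efi" { f=1 } END { exit !f }'; then
    echo "UEFI installation"
fi
```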
By doing cat /etc/fstab we already learned the boot partition's UUID, so we don't have to look any further:
cat /etc/fstab gave us UUID=3a352da4027b /boot ext4 defaults 0 2
and
blkid gave us /dev/sdb5: LABEL="DebianBoot" UUID="3a352da4027b" BLOCK_SIZE="4096" TYPE="ext4" PARTLABEL="DebianBoot" PARTUUID="32d43b285464". Notice how the UUID matches between the two? Now we know our boot partition path is /dev/sdb5.
We can just do
mount /dev/sdb5 /boot

And that's it, we successfully mounted the boot!
The first thing to do is mount the EFI partition under /boot/EFI; blkid gives us a pretty straightforward idea of which path belongs to the EFI partition.
blkid line from the result: /dev/sdb3: LABEL_FATBOOT="EFI" LABEL="EFI" UUID="3S52-D800" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI" PARTUUID="97e02c48aaa2"
Don't worry, though: the LABEL and PARTLABEL are things I set with gparted earlier. In your case they won't be there unless you set them in the past. For now, look for the word EFI in any entry; failing that, it's most likely the partition with "vfat" as its type.
cat /etc/fstab would do the same. XD :> Look at which UUID is listed for the /boot/EFI mount point, then get the corresponding path for that UUID, as shown earlier.
Since we know the EFI partition is /dev/sdb3, we can move forward.
After identifying the EFI partition we need to mount it by doing:
mkdir -p /boot/EFI
mount /dev/sdb3 /boot/EFI

Cool, we've reached a pretty good point. Before heading to the next step, let's verify everything is going in the right direction:
lsblk
ls /boot
ls /boot/EFI/EFI

lsblk should show a mount point at /boot/EFI next to the EFI partition, and /boot next to the boot partition if it's separate.
ls /boot should give you something like config-6.1.0-41-amd64 EFI grub initrd.img-6.1.0-41-amd64 System.map-6.1.0-41-amd64 vmlinuz-6.1.0-41-amd64. If it's empty (in case the files were wiped, or you recreated the partitions), make sure everything is alright and you mounted the right boot partition; or, if you don't have one, make sure fstab doesn't mention a /boot partition.
ls /boot/EFI should contain a folder called EFI; listing that folder may show entries like Ubuntu, Debian, Grub, Windows. That's where each system's boot entry lives for the EFI to function. If they were wiped, or you recreated the partition, you won't see anything.
root@debian:/# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 20G 0 disk
└─sda1 8:1 0 20G 0 part
sdb 8:16 0 447.1G 0 disk
├─sdb1 8:17 0 16M 0 part
├─sdb2 8:18 0 119.2G 0 part
├─sdb3 8:19 0 1000M 0 part /boot/EFI
├─sdb4 8:20 0 234.2G 0 part /
└─sdb5 8:21 0 953M 0 part /boot
sr0 11:0 1 1024M 0 rom
root@debian:/# ls /boot
config-6.1.0-41-amd64 EFI grub initrd.img-6.1.0-41-amd64 System.map-6.1.0-41-amd64 vmlinuz-6.1.0-41-amd64
root@debian:/# ls /boot/EFI
EFI
root@debian:/# ls /boot/EFI/EFI
Boot debian Microsoft

Then follow up with the next steps under the "Restoring /boot & grub" section.
Make sure you mounted /boot if it's a separate partition: run lsblk and check there's a mount point at /boot next to your boot partition.
root@debian:/# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 20G 0 disk
└─sda1 8:1 0 20G 0 part
sdb 8:16 0 447.1G 0 disk
├─sdb4 8:20 0 234.2G 0 part /
└─sdb5 8:21 0 953M 0 part /boot
sr0 11:0 1 1024M 0 rom

If you don't have a separate boot partition, and you didn't install anything besides your broken system, you shouldn't see any other partition bigger than 20MB under your disk identifier.
Now follow up with the next steps under the "Restoring /boot & grub" section.
Now that we've made sure everything is alright and aligned as we wanted, we can reinstall the kernel if it's missing, or just rebuild the boot and GRUB.
For reinstalling the kernel you have to return to the live/working system's terminal and bind-mount its /etc/resolv.conf over the broken system's resolv.conf, just so the broken system can actually resolve your DNS queries.
To fix the internet inside the broken system, do the following inside the live/working system terminal:
sudo mount -o bind /etc/resolv.conf /mnt/broken_Linux/etc/resolv.conf

Then return to the broken system terminal, or chroot into it again, since the following commands run inside it.
You can either reinstall the current kernel, or install the newest version:
apt update

To reinstall the current one:
apt install --reinstall linux-image-$(uname -r) linux-headers-$(uname -r)

To install the newest version:
apt install linux-image-amd64 linux-headers-amd64

Either way, this performs a fresh install of your system kernel.
Now we can finally go to the end of this rabbit hole and rebuild the /boot.
update-initramfs -u -k all

This adds "initrd.img" and "vmlinuz" to your boot directory, alongside some other files required by GRUB or the boot process, which achieves our goal.
Lastly, we should fix GRUB and call it a day. It's as easy as doing:
UEFI:
grub-install --target=x86_64-efi --efi-directory=/boot/EFI --bootloader-id=debian
update-grub

You can change the bootloader id to whatever you like; that's the EFI entry name that will show up in the BIOS boot list.
Legacy:
Getting your disk identifier:
root@debian:/# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 20G 0 disk
└─sda1 8:1 0 20G 0 part
sdb 8:16 0 447.1G 0 disk
├─sdb4 8:20 0 234.2G 0 part /
└─sdb5 8:21 0 953M 0 part /boot
sr0 11:0 1 1024M 0 rom

Remember when we got the partition identifier? Now, instead of a partition identifier (sdb4 for me), I need the whole-disk identifier, which is sdb: it's the identifier you'll find before any partition in the disk tree, with "disk" next to it instead of "part". Another way is to just drop the number from your root file system identifier, so sdb4 becomes sdb.
Then:
grub-install --target=i386-pc /dev/sdb
update-grub
At this point, it wouldn't hurt to add a background to GRUB, would it?
Copy your background picture to /boot/grub, then add this line to your GRUB config at /etc/default/grub: GRUB_BACKGROUND="/boot/grub/1.png". You can also change GRUB_COLOR_NORMAL to white or black to suit your needs:
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
# info -f grub -n 'Simple configuration'
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX=""
#GRUB_ENABLE_CRYPTODISK=y
GRUB_BACKGROUND="/boot/grub/1.png"
GRUB_COLOR_NORMAL="black"
GRUB_COLOR_HIGHLIGHT="black"
Finally, run update-grub once more to apply your new background.
You can either unmount everything we mounted or go the savage path and just disconnect your disk. Whatever you like, but I'd recommend unmounting first:
#(/boot/EFI If UEFI).
#(/boot if you have a separate boot partition).
umount -l /boot/EFI
umount -l /boot
Leaving chroot:
exit
Unmounting the rootfs:
sudo umount -l /mnt/broken_Linux
That's it! Your /boot should now contain the essential files for booting, alongside the GRUB bootloader if Legacy, or an EFI entry inside your ESP if UEFI.
If you booted from a live USB, reboot your device and look for your broken system's boot option if it was a UEFI installation; for Legacy, it should work once you make the installation disk the first boot option in the BIOS.
If you hooked your disk up to another device, disconnect it and put it back inside the broken device.
Otherwise, if you run into any issue I didn't address, you can always open an issue, or submit a pull request if you've already fixed it.
LUKS-encrypted file system
DO NOT ENTER YOUR PASSPHRASE IF YOUR LIVE SYSTEM ASKS FOR IT AFTER CONNECTING YOUR DISK. DOING SO WILL AUTO-MOUNT THE BROKEN SYSTEM, AND YOU WON'T BE ABLE TO MOUNT IT YOURSELF AS DESCRIBED BELOW.
Just like the previous section, we need some identifiers for the disk that contains your encrypted system and its partitions.
IN THE LIVE SYSTEM TERMINAL
blkid:
remohexa@ubuntu:~/Desktop$ blkid
/dev/sda1: UUID="3FB8-D4B0" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="0c48b019-9dc8-43ff-949d-f48860f49520"
/dev/sda2: UUID="5da7785f-2fd8-4797-a5d8-3defd674c9c2" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="ab59d4ed-bba8-4624-a801-caa04a958edc"
/dev/sda3: UUID="78b9d831-850e-4a59-bcc6-c45496e65cdb" TYPE="crypto_LUKS" PARTUUID="19a46c3e-7184-40bc-b712-e8622f2161e6"
For now, all we need is the partition whose type is crypto_LUKS. Its identifier is /dev/sda3.
After identifying which path is our encrypted root file system, we need to decrypt it.
#(Don't forget to replace /dev/sda3 with your LUKS fs identifier)
sudo cryptsetup luksOpen /dev/sda3 encpart
After entering your LUKS passphrase, the command adds encpart to /dev/mapper:
remohexa@ubuntu:~/Desktop$ ls /dev/mapper
control encpart
There are two common layouts for a Linux LUKS setup: "LUKS without LVM" and "LVM inside LUKS".
In case it is "LVM inside LUKS", you'll find mapper names other than /dev/mapper/encpart under /dev/mapper, e.g.
remohexa@ubuntu:~/Desktop$ ls /dev/mapper/
control debian--vg-root debian--vg-swap_1 encpart
debian--vg-root is your actual root fs. If there isn't an obvious name like debian--vg-root, mount each mapper name and list its contents until you find the right one. If you reboot your device or reconnect your disk, make sure to re-check the device names under /dev by rerunning blkid, since they are likely to change, e.g. from /dev/sda to /dev/sdd.
Quick confirmation: trying to mount encpart directly should fail with an error like
remohexa@ubuntu:~/Desktop$ sudo mount /dev/mapper/encpart /mnt
mount: /mnt: unknown filesystem type 'LVM2_member'.
dmesg(1) may have more information after failed mount system call.
That alone confirms the current setup is indeed "LVM inside LUKS".
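That check can be sketched as a tiny helper. The layout_for function is hypothetical, and the TYPE value is passed in directly here; on a real system it would come from `blkid -o value -s TYPE /dev/mapper/encpart`:

```shell
# Sketch: classify the LUKS layout from the blkid TYPE of the opened mapper.
layout_for() {
  if [ "$1" = "LVM2_member" ]; then
    echo "lvm-inside-luks"   # mount the LV, e.g. /dev/mapper/debian--vg-root
  else
    echo "plain-luks"        # mount the mapper device directly
  fi
}

layout_for "LVM2_member"   # prints: lvm-inside-luks
```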
If it is LVM inside LUKS, make sure to replace encpart with the rootfs mapper name, e.g. debian--vg-root in my case.
You can create a new directory as your mount point:
sudo mkdir -p /mnt/broken_system
Then mount your root fs there:
sudo mount /dev/mapper/encpart /mnt/broken_system
OR (LVM inside LUKS):
sudo mount /dev/mapper/debian--vg-root /mnt/broken_system
Now, listing /mnt/broken_system should show your encrypted partition's root directory:
remohexa@ubuntu:~/Desktop$ ls /mnt/broken_system
bin dev home initrd.img.old lib64 media opt root sbin sys usr vmlinuz
boot etc initrd.img lib lost+found mnt proc run srv tmp var vmlinuz.old
Let's get the broken system's mount points:
cat /mnt/broken_system/etc/fstab
Which will give us something like:
remohexa@ubuntu:~/Desktop$ cat /mnt/broken_system/etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# systemd generates mount units based on this file, see systemd.mount(5).
# Please run 'systemctl daemon-reload' after making changes here.
#
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/mapper/sda3_crypt / ext4 errors=remount-ro 0 1
# /boot was on /dev/sda2 during installation
UUID=5da7785f-2fd8-4797-a5d8-3defd674c9c2 /boot ext4 defaults 0 2
# /boot/EFI was on /dev/sda1 during installation
UUID=3FB8-D4B0 /boot/EFI vfat umask=0077 0 1
/dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
If you didn't find a mount point for the EFI partition, you have a Legacy setup; skip any EFI-related steps.
By comparing fstab's contents with blkid's output, we learn each partition's current path alongside its mount point in the broken system. Match the UUIDs, then take the current path from the beginning of the corresponding blkid output line.
fstab:
UUID=5da7785f-2fd8-4797-a5d8-3defd674c9c2 /boot ext4 defaults 0 2
blkid:
/dev/sda2: UUID="5da7785f-2fd8-4797-a5d8-3defd674c9c2" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="ab59d4ed-bba8-4624-a801-caa04a958edc"
The current path for the broken system /boot is /dev/sda2
fstab:
UUID=3FB8-D4B0 /boot/EFI vfat umask=0077 0 1
blkid:
/dev/sda1: UUID="3FB8-D4B0" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="0c48b019-9dc8-43ff-949d-f48860f49520"
The current path for the EFI partition is /dev/sda1
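That UUID matching can be scripted. This sketch works on a captured sample of blkid output; on a real system you could feed it `$(blkid)`, or skip the pipeline entirely, since `blkid -U <uuid>` performs the same lookup:

```shell
# Sketch: resolve an fstab UUID to its current device path.
blkid_out='/dev/sda1: UUID="3FB8-D4B0" TYPE="vfat"
/dev/sda2: UUID="5da7785f-2fd8-4797-a5d8-3defd674c9c2" TYPE="ext4"'

uuid="5da7785f-2fd8-4797-a5d8-3defd674c9c2"
dev=$(printf '%s\n' "$blkid_out" | grep -F "UUID=\"$uuid\"" | cut -d: -f1)
echo "$dev"   # prints: /dev/sda2
```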
We still need something super important: the crypttab name used for luksOpen.
cat /mnt/broken_system/etc/crypttab
which will give us something like:
sda3_crypt UUID=78b9d831-850e-4a59-bcc6-c45496e65cdb none luks,discard
sda3_crypt is the crypttab name for LUKS, and this name is critically important: opening LUKS under any other name will drop you into an initramfs shell during boot, because the boot process won't find the mapping it expects.
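Since the exact name matters, it is worth extracting it mechanically rather than retyping it. A sketch, using the sample crypttab line from above:

```shell
# Sketch: pull the mapper name (first field) out of a crypttab line.
line='sda3_crypt UUID=78b9d831-850e-4a59-bcc6-c45496e65cdb none luks,discard'
name=$(printf '%s\n' "$line" | awk '{print $1}')
echo "$name"   # prints: sda3_crypt
```

On the real file that would be something like `awk '{print $1}' /mnt/broken_system/etc/crypttab` (ignoring any comment lines).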
After unpacking fstab, we need to unmount the broken system's rootfs and close LUKS, so we can reopen it with sda3_crypt as its mapper name:
sudo umount -l /mnt/broken_system
sudo cryptsetup luksClose encpart
If you get any errors while unmounting or closing LUKS, disconnect and reconnect your disk, but make sure to check the disk's new paths, since they are likely to change: when rerunning blkid or lsblk you may find that sda, sda1, sda2 became sdd, sdd1, sdd2. If that would confuse you, restart your device instead of reconnecting the disk; the names are less likely to change, but verify that they didn't.
Then
sudo cryptsetup luksOpen /dev/sda3 sda3_crypt
AGAIN, MAKE SURE THAT sda3_crypt MATCHES YOUR CRYPTTAB NAME; BOOT WILL DROP TO AN INITRAMFS SHELL IF THE NAME IS DIFFERENT.
Now doing ls /dev/mapper should give you something like
remohexa@ubuntu:~/Desktop$ ls /dev/mapper
control sda3_crypt
OR (LVM inside LUKS):
remohexa@ubuntu:~/Desktop$ ls /dev/mapper/
control debian--vg-root debian--vg-swap_1 sda3_crypt
We need to mount it somewhere so we can chroot into it later and fix things:
sudo mkdir -p /mnt/broken_system
sudo mount /dev/mapper/sda3_crypt /mnt/broken_system
OR (LVM inside LUKS):
sudo mkdir -p /mnt/broken_system
sudo mount /dev/mapper/debian--vg-root /mnt/broken_system
Try listing /mnt/broken_system; it should show your broken system's root directory. Now let's bind the working kernel's virtual filesystems into the broken system, so it can access the running system's /dev, /proc and /sys:
sudo mount -o bind /dev /mnt/broken_system/dev
sudo mount -o bind /proc /mnt/broken_system/proc
sudo mount -o bind /sys /mnt/broken_system/sys
Chrooting into the broken system:
sudo chroot /mnt/broken_system
Now we should be inside the broken system.
Mounting the boot if it was a separate partition (according to fstab).
Since we identified /dev/sda2 to be our boot partition, we would need to mount it as our broken system /boot
mount /dev/sda2 /boot
If your /boot isn't a separate partition, skip this step.
ALL OF THE FOLLOWING INSTRUCTIONS RUN IN THE BROKEN SYSTEM'S TERMINAL UNLESS STATED OTHERWISE.
Mounting the EFI partition:
Since we identified /dev/sda1 to be our EFI partition, then we gonna mount it by doing
mount /dev/sda1 /boot/EFI
Now that everything is mounted and aligned the way we want, we can reinstall the kernel if it's missing, or just rebuild the initramfs and GRUB.
To reinstall the kernel you will need working DNS inside the chroot. Return to the live/working system terminal and bind-mount its /etc/resolv.conf over the broken system's resolv.conf, so the broken system can resolve your DNS queries.
In the live/working system terminal:
sudo mount -o bind /etc/resolv.conf /mnt/broken_system/etc/resolv.conf
Return to the broken system terminal, or chroot into it; the following commands run inside of it.
You can either reinstall the current kernel or install the newest version.
If /boot/vmlinuz-&lt;version&gt;, /boot/System.map-&lt;version&gt; and /boot/config-&lt;version&gt; are missing from your /boot, the whole kernel is missing. In that case the kernel installation step is mandatory, or update-initramfs will give you an error.
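A quick way to test for that condition before running anything. The kernel_present helper is illustrative, not part of any standard tool:

```shell
# Sketch: is there any kernel image in the given boot directory?
kernel_present() {
  ls "$1"/vmlinuz-* >/dev/null 2>&1
}

if kernel_present /boot; then
  echo "kernel found, rebuilding the initramfs may be enough"
else
  echo "no vmlinuz-* in /boot, reinstall linux-image first"
fi
```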
apt update
To reinstall the current one:
apt install --reinstall linux-image-$(uname -r) linux-headers-$(uname -r)
To install the newest version:
apt install linux-image-amd64 linux-headers-amd64
Either way, you end up with a freshly installed kernel.
Now we can finally reach the end of this rabbit hole and rebuild the initramfs.
update-initramfs -u -k all
This adds "initrd.img" to your boot directory (reinstalling the kernel is what adds "vmlinuz"), along with other files required by GRUB and the boot process, which is exactly what we need.
Lastly, we should fix GRUB and call it a day. It's as easy as:
UEFI:
grub-install --target=x86_64-efi --efi-directory=/boot/EFI --bootloader-id=debian
update-grub
You can change the bootloader ID to whatever you like; it's the EFI entry that will show up in the firmware boot menu.
Legacy:
Getting your disk identifier by using lsblk:
#(Luks without Lvm)
root@debian:/# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 20G 0 disk
└─sda1 8:1 0 20G 0 part
sdb 8:16 0 447.1G 0 disk
├─sdb4 8:20 0 234.2G 0 part
│ └─sdb4_crypt 252:0 0 19.5G 0 crypt /
└─sdb5 8:21 0 953M 0 part /boot
sr0 11:0 1 1024M 0 rom
--------------------------------------------------------------------
#(Lvm inside Luks)
root@ubuntu:/# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 20G 0 disk
└─sda1 8:1 0 20G 0 part
sdb 8:16 0 20G 0 disk
├─sdb2 8:18 0 1K 0 part
├─sdb4 8:20 0 19.5G 0 part
│ └─sdb4_crypt 252:0 0 19.5G 0 crypt
│ ├─debian--vg-root 252:1 0 18.5G 0 lvm /
│ └─debian--vg-swap_1 252:2 0 976M 0 lvm
└─sdb5 8:21 0 487M 0 part /boot
sr0 11:0 1 1024M 0 rom
Remember when we got the partition identifier? Now, instead of a partition identifier (/dev/sdb4 in my case), we need the whole-disk identifier, /dev/sdb: it's the entry that sits above the partitions in the disk tree, marked "disk" instead of "part" in the TYPE column. Another way is to simply drop the number from your rootfs identifier, so /dev/sdb4 becomes /dev/sdb.
Then:
grub-install --target=i386-pc /dev/sdb
update-grub
At this point, it wouldn't hurt to add a background to GRUB, would it?
Copy your background picture to /boot/grub, then add this line to your GRUB config at /etc/default/grub: GRUB_BACKGROUND="/boot/grub/1.png". You can also change GRUB_COLOR_NORMAL to white or black to suit your needs:
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
# info -f grub -n 'Simple configuration'
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX=""
#GRUB_ENABLE_CRYPTODISK=y
GRUB_BACKGROUND="/boot/grub/1.png"
GRUB_COLOR_NORMAL="black"
GRUB_COLOR_HIGHLIGHT="black"
Finally, run update-grub once more to apply your new background.
You can either unmount everything we mounted or go the savage path and just disconnect your disk. Whatever you like, but I'd recommend unmounting first:
#(/boot/EFI If UEFI).
#(/boot if you have a separate boot partition).
umount -l /boot/EFI
umount -l /boot
Leaving chroot:
exit
Unmounting and closing LUKS:
sudo umount -l /mnt/broken_system
sudo cryptsetup luksClose sda3_crypt
That's it! Your /boot should now contain the essential files for booting, alongside the GRUB bootloader if Legacy, or an EFI entry inside your ESP if UEFI.
If you booted from a live USB, reboot your device and look for your broken system's boot option if it was a UEFI installation; for Legacy, it should work once you make the installation disk the first boot option in the BIOS.
If you hooked your disk up to another device, disconnect it and put it back inside the broken device.
Otherwise, if you run into any issue I didn't address, you can always open an issue, or submit a pull request if you've already fixed it.
e.g. “No bootable device found”, “Windows blue screen”