A Hardware enthusiast's view on the usefulness of open source Firmwares like Coreboot

Version 2/1/2020
   by Zir Blazer


Table of Contents




0 - FIRMWARE GENERALITIES and CURRENT STATUS OF AFFAIRS


I believe that no matter how good a Motherboard is from a Hardware standpoint, the Firmware is at least as important, since it ultimately defines whether the Motherboard as a whole unit will be a great one or just mediocre. A Firmware that doesn't fully expose the configurable parameters of the available Hardware is limiting its flexibility, and thus its possible use cases.


As an introduction to Motherboard Firmwares and why the current situation is appalling, here is a small crash course...



Modern Firmwares are extremely complex pieces of Software code. As such, it is not uncommon for some functionality to be bugged or to not work as intended (Sometimes Firmware updates break things that used to work, too). It can also happen that you figure out that your use case needs a critical feature that your Hardware theoretically has but that is not exposed by the Firmware, making the whole Motherboard as good as a paperweight.

Whenever there is something broken or you want a new feature to be added, the first course of action is to request that the Motherboard manufacturer do something about the matter, as you're paying them for both the physical product and its support. In my personal experience, while for fixes they usually seem to respond decently enough (It is common to hear about Motherboard manufacturers sending custom Firmware versions to solve a specific user issue), they never comply with new feature requests, like adding an option that the Motherboard didn't originally ship with. If they don't want to be helpful, then you will need a different Motherboard whose Firmware allows you to do what you want, which is a shame if the current one is otherwise totally fine.

The Motherboard manufacturers do not design their Firmwares from scratch. Instead, they license a Firmware core from an IBV (Independent BIOS Vendor), like Aptio V from AMI, then customize it for each Motherboard model. During the customization process, the Motherboard manufacturers limit the feature set of their customized Firmwares based on market segmentation, thus, for example, a typical gaming oriented Motherboard may have tons of options to run the Hardware out-of-spec (Usually for overclocking), yet completely miss niche features like selectively disabling the loading of PCI Option ROMs with PCIe Slot level granularity, advanced in-slot PCIe Bifurcation, some obscure virtualization options, and so on. Instead, these features are often found almost exclusively in Workstation or Server grade Motherboards, even when there is no Hardware based reason for them to not be available in a similar consumer Desktop part. Sadly, Motherboards aimed at other market segments have their own set of drawbacks: A Server Motherboard rarely allows any tweaking, so you can't overclock/undervolt/whatever if using one of those (At least Firmware side. Sometimes it is possible via Software, but most of the tools are for Windows only, so if you're a Linux user, you're out of luck). Then you have budget Motherboards, which are usually devoid of anything but the most basic set of options, or worse yet, those of prebuilt OEM computers, mainly Notebooks, which tend to be completely locked down. Basically, you get only the Firmware options that the Motherboard manufacturer wanted you to have.



Here is where BIOS modding communities (The most popular one at the moment is Win-RAID) come in: There is a wide variety of tools, some sourced from the IBVs themselves, that may allow you to extend these proprietary Firmwares in some way. For example, in many cases, the Firmware core has coded a lot of options that the Motherboard manufacturer simply flagged as hidden during the customization process, thus using the IBV tools to remove these hidden flags may allow you to easily create a custom Firmware version that unlocks all the menu options that are already present in the code but, for some reason, hidden from sight. Doing so is one of the most basic yet useful BIOS mods. Going further, there are many, many more possible mods, with different levels of complexity. For example, Intel released Microcode updates with Meltdown and Spectre mitigations for Processors going as far back as 2006 Conroe (Core 2 Duo), but most Motherboard manufacturers didn't bother to release Firmware updates with those for ancient Motherboards, yet a user with an old Server in active duty may want to do a BIOS mod to include the latest Microcode for security purposes. Some people also like to update the embedded Option ROMs (Firmware-level Drivers) to the latest possible version, even when, due to changelogs usually not being public, it is arguable what issues they actually solve (I'm always wary of regressions).
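
As a taste of what Microcode modding involves at the file format level, here is a minimal C sketch of the Intel Microcode update header (layout as documented in the Intel SDM) and the checksum rule that any tool splicing Microcode into a Firmware image has to respect. The tool itself is illustrative, not taken from any particular modding suite:

    #include <stdio.h>
    #include <stdint.h>

    /* Intel Microcode update header, as documented in the Intel SDM
       (Volume 3, "Microcode Update Facilities"). All fields little-endian. */
    struct intel_ucode_header {
        uint32_t header_version;      /* Currently always 1 */
        uint32_t update_revision;     /* The Microcode version itself */
        uint32_t date;                /* BCD, mmddyyyy */
        uint32_t processor_signature; /* CPUID(1).EAX family/model/stepping */
        uint32_t checksum;            /* All DWORDs of the update sum to 0 */
        uint32_t loader_revision;     /* Currently always 1 */
        uint32_t processor_flags;     /* Platform ID bitmask */
        uint32_t data_size;           /* 0 means 2000 bytes of data */
        uint32_t total_size;          /* 0 means 2048 bytes total */
        uint32_t reserved[3];
    };

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s ucode.bin\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        static uint32_t image[65536];  /* Big enough for any current update */
        size_t dwords = fread(image, 4, sizeof(image) / 4, f);
        fclose(f);

        const struct intel_ucode_header *h = (const void *)image;
        uint32_t total = h->total_size ? h->total_size : 2048;

        /* The SDM mandates that all DWORDs of the update sum to zero; a
           splicing tool should verify this before touching a Firmware image. */
        uint32_t sum = 0;
        for (uint32_t i = 0; i < total / 4 && i < dwords; i++)
            sum += image[i];

        printf("Revision 0x%x, date %08x, CPUID signature 0x%x, checksum %s\n",
               h->update_revision, h->date, h->processor_signature,
               sum == 0 ? "OK" : "BAD");
        return 0;
    }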

At some point, you will figure out that no matter how great the BIOS modding communities can be, the current focus is entirely wrong. There are people spending a lot of time, effort and talent attempting to fix the shortcomings of proprietary Firmwares, even though that is supposed to be the job of the Motherboard manufacturer. I understand that in the case of old Motherboards, it makes sense for the Motherboard manufacturer to stop giving support because after the end of a Motherboard commercial life cycle doing so is not profitable anymore (The most notorious case involves the previously mentioned Microcode updates for 2006 Conroe, as it is not realistic to expect that a for-profit Motherboard manufacturer spends resources to release Firmware updates for ancient, 10+ year old LGA 775 Motherboards that don't make them money), but I find it offensive that users have to rely on BIOS modding for current generation ones just because the Motherboard manufacturer decided that you didn't deserve a specific option. Regardless of the context, the proprietary nature of these Firmwares makes it impossible for third parties to take over Firmware maintenance in an orderly way. Those that still try to do so are mostly limited to what the IBV tools allow them to do, as anything more complex than what the tools support may require extensive reverse engineering skills just to be able to manually understand and modify the proprietary code. That is a situation where an open source Firmware would shine...





Enter Coreboot. Coreboot is an open source Firmware framework that, in theory, can dramatically improve all the previously described scenarios by allowing for easier community-driven maintenance and the possibility of adding far more features than a proprietary Firmware will ever dream of. Being open source, everyone can audit or augment such a Firmware. It can be said that with it, the sky is the limit.

The first important thing is to understand how Coreboot actually works. Coreboot takes care only of the very first steps of platform initialization and the POST process; afterwards, it passes system control to something that in Coreboot parlance is known as a payload, which is what actually takes care of booting an Operating System. For example, in the traditional sense, people refer to BIOS and UEFI as the whole Firmware. However, both BIOS and UEFI are better thought of as a sort of API, as they provide the BIOS Services and the EFI Runtime Services, which the OS Boot Loaders expect to be available to help them during the boot process. In Coreboot, the BIOS and UEFI interfaces (Mainly required for Windows compatibility) are supported via optional payloads like SeaBIOS and TianoCore; they are not part of Coreboot itself.



Recently, Coreboot has been criticized a lot because it isn't all that open source anymore. The problem is that on modern Intel and AMD platforms, you are required to use proprietary binary blobs to do the platform initialization, and that proprietary code does most of the heavy lifting that Coreboot itself used to do in the past. This also applies to Device Firmwares (Option ROMs), as they're proprietary in nature, too. When a Motherboard uses third party controllers like the ASMedia SATA Controllers, which are seen rather often, their Device Firmwares are usually embedded into the Motherboard Firmware itself (Discrete PCI Cards sometimes have an EEPROM chip that contains an Option ROM, but the general procedure that the Motherboard Firmware uses to load them is roughly the same regardless of whether it is embedded or external), so there can be several vendors providing proprietary code for a given Motherboard, not only Intel and AMD. While for the privacy paranoid type of person that wants everything to be open source this is not adequate, for those like me that want to rebel against the Motherboard manufacturers, it is still a viable solution, because you will be able to get under user control all the Hardware functionality that Intel and AMD provide but the Motherboard manufacturers like to hide, with an easily maintainable and extendable Firmware scheme. You may still have to bow to your Intel and AMD overlords, but no longer to the Motherboard manufacturers and IBVs. That is much, much better than nothing.

Assuming that you're willing to use the mandatory proprietary binary blobs, the vendors still have to supply them. Intel has been consistently delivering the binary blobs required to initialize its latest Processors and Chipsets as part of the FSP (Firmware Support Package) soon after the release of any new platform. AMD, which previously used to support open source Firmware efforts like Coreboot far more than Intel did, didn't even provide the minimum required code and documentation to get Coreboot ported to any Zen based platform, which is why no AM4 Motherboard is currently supported. However, Google has recently been working on a Coreboot port to be deployed in a new Chromebook model that uses a Zen based APU, as they have access to AMD documentation under an NDA. AMD has also been giving hints that they may be willing to provide more support to open source Firmwares, albeit that has yet to materialize.



Perhaps one of the most notorious issues with Coreboot is that, in practice, actually using it is an unreachable goal for any standard user. Even if all the required documentation to make a Coreboot port for a specific Motherboard were available, this does NOT guarantee that such a port will ever be made. Coreboot ports do not materialize out of thin air; there has to be someone that works to make them happen. There is a rather limited number of individuals that have the required tools and skills to do the ports all by themselves in a timely manner, and they will do the work just for the Motherboards that they're interested in. You just have to look at the status of Intel platforms to notice the precarious trend: Even when Intel is rather quick to release the required Processor and Chipset FSP binary blobs, there are barely any post-Skylake Motherboards that currently have a Coreboot port. One of the latest that I heard of is the Supermicro X11SSH-TF for Kaby Lake, and it happened because it was backed by a cloud provider. There is pretty much nothing else; the roster of supported modern Motherboards is really that small.

This raises an important question: Who are we going to blame for the current lack of Coreboot (Or any other open source Firmware) support? Is it due to lack of vendor support, or lack of community interest? Since the tools to make Coreboot ports for Intel platforms are already available, the latter seems to be the correct answer. It is important to consider that even for those shortsighted people that think that the open source Firmware matter is all or nothing (They want a fully open source Intel ME/AMD PSP with no binary blobs like Libreboot does, and don't care about any middle ground progress at all), it is hard to push a vendor to spend resources supporting something that no one seems to use (At least on the end user side, as some big cloud providers and Google do maintain their own Coreboot ports for their custom Hardware). In the case of AMD, which used to be financially constrained, this is truer than for Intel, as it is harder to lobby AMD to at least provide public documentation and binary blobs when Intel already does so and there isn't a plethora of Motherboards with Coreboot or anything else that will make AMD believe that they can capitalize on such an effort with a return on its investment. Basically, it may be necessary to showcase a successful consumer Motherboard using Coreboot within the current limitations, just to give the open source Firmware community some leverage. Otherwise, you're stuck in a chicken-and-egg scenario.





Even if there was complete support from both Intel and AMD, porting Coreboot to a given Motherboard is actually far harder than it looks. While it is true that the major chips like the Processor and Chipsets are usually well supported (At least on the Intel side), so a port for a well known platform already shares a lot of preexisting working code, there are a multitude of small details that require Motherboard specific support. For example, the Super I/O chips and Embedded Controllers may not be the same between different Motherboards for the same platform, and the way that the GPIO (General Purpose I/O) Pins from the Chipset and Super I/O are wired is always different, something that is dramatically important as there is risk of physical damage if GPIO lines are misconfigured.

Since consumer Motherboards are universally proprietary, there are no public schematics that can quickly tell you what goes where. Instead, people that want to port Coreboot to their proprietary Motherboards have to reverse engineer these details, and that takes time. If you want examples of all the things that can go wrong in a Coreboot port and the extremes that some developers had to go through to reverse engineer a Motherboard wiring, you can read this presentation (Seriously, look at the x-ray photo on Page 20).



Usually, even if the Coreboot port is functional, it is rare for it to match all the features supported by the Motherboard's original proprietary Firmware, simply because not all the features can be implemented with the limited public information. One of the most notorious missing features is overclocking, as you need in-depth knowledge of the Motherboard power delivery circuitry and how to manage it to run outside the standard parameters defined by the official specifications. This is, precisely, the main reason why I believe that in order to make good Coreboot ports, it is necessary for Motherboards to have public schematics, as if they were open Hardware.

There are small sized Hardware designers and manufacturers that align with the open Hardware philosophy: Raptor Engineering is known to ship the POWER based Talos 2 Motherboard schematics on a DVD for its customers, but I don't know if these are publicly available. Sadly, they're not interested in doing anything for x86. There is at least a single x86 Motherboard manufacturer that actually makes some low level documentation publicly available: SECO/UDOO, which provided some schematics for its embedded Motherboards, like the UDOO X86 (Full schematics), UDOO X86 II (Some GPIO Pinouts), and the SECO COMe-B75-CT6 COM Express Type 6 Carrier Board (There are a few minor schematics in the Manual). That is far better than anything I have ever seen from any of the typical consumer Motherboard manufacturers. None of the big names provides that level of detail to end users. Actually, only recently did they begin to include Block Diagrams of the platform topology as standard practice, just to properly identify whether a given PCIe Slot is wired to the Processor PCIe Lanes or the Chipset ones.



Assuming that you were designing a Motherboard from scratch with the goal that a Coreboot port is easy to do and has a complete feature set, some of your chip choices could be made based on the availability of public documentation, like the Super I/O chips (Or BMCs). A Motherboard with public schematics whose chips have public Data Sheets is simply a natural match for an open source Firmware. Sadly, it is important to mention that AMD requires an NDA for anything related to Zen, which is the reason why I believe that SECO/UDOO is omitting the schematics for its Zen based Motherboards like the UDOO Bolt. ASpeed also requires an NDA to get Data Sheets for its BMCs like the AST2500 and the new AST2600, yet fortunately those have their own open source Firmware in the form of OpenBMC. Ironically, since my dream is a consumer-oriented mATX or ATX sized Motherboard with an AMD EPYC Embedded coupled with an ASpeed AST2600 BMC (Supermicro already has something similar, but those have no integrated Azalia audio, few USB Ports, and are mITX sized, which means only a single PCIe Slot), it means that the Data Sheets of the two major chips would be completely unavailable. Go figure...



1 - FLASH EEPROM and UEFI NVRAM


The Firmware is physically stored in a Flash EEPROM chip located somewhere on the Motherboard. This type of chip is chosen for Firmware duties because it requires almost no initialization, making it extremely convenient for storing the very first piece of code that the Processor is going to execute as soon as it begins operation, a point where there aren't many other possible choices (If any).

Modern Flash EEPROM chips used in PC Motherboards use an SPI interface, which, depending on the platform, can be wired to either the Super I/O, the Chipset, or the Processor itself (On AMD Zen based platforms, the Processor has a builtin SPI Controller, so it can directly interface with the SPI Flash EEPROM containing the Firmware). A Motherboard may have more than one Flash EEPROM besides the main one with the Motherboard Firmware, as some Devices like the BMC or even a few NICs can have their own exclusive Flash EEPROM, too. Some Motherboards (The examples I recall reading about are all Notebooks) split the main Motherboard Firmware image across two Flash EEPROMs because it seems to be cheaper for manufacturers to buy an 8 MiB + 4 MiB Flash EEPROM pair than to go straight for a 16 MiB part.



The main Flash EEPROM chip does not only contain the Firmware, but also the UEFI NVRAM (Non Volatile RAM). Before UEFI, BIOS type Firmwares had a comparatively small amount of configurable settings that could entirely fit inside the RTC (Real Time Clock) SRAM, which, thanks to being battery backed, effectively worked as a sort of NVRAM. The Clear CMOS procedure fully deleted any settings stored in the volatile RTC SRAM by removing the RTC battery power source, thus the Firmware always reverted to factory defaults as if it was the first time powering on. Modern UEFI type Firmwares extended the NVRAM concept by also using the Flash EEPROM chip itself to store some settings, namely the UEFI Boot Parameters and the Public Key Database.
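
For the curious, that legacy RTC SRAM is still trivially accessible today through the classic index/data port pair at 0x70/0x71. A minimal Linux C sketch (needs root for ioperm(); the layout of the settings area past the clock registers is vendor specific, so the registers read below are just classic examples):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/io.h>

    /* Reads one byte of the battery-backed RTC SRAM through the classic
       index/data port pair. Bit 7 of port 0x70 is the NMI mask; left as 0. */
    static unsigned char cmos_read(unsigned char reg)
    {
        outb(reg, 0x70);   /* Select the CMOS register */
        return inb(0x71);  /* Read its contents */
    }

    int main(void)
    {
        if (ioperm(0x70, 2, 1) != 0) {  /* Port I/O access; run as root */
            perror("ioperm");
            return EXIT_FAILURE;
        }
        /* Registers 0x00-0x0D belong to the clock itself; the SRAM above
           that is what BIOSes historically used for their settings. */
        printf("RTC seconds: %02x (BCD)\n", cmos_read(0x00));
        printf("Byte 0x10 (classic floppy type register): %02x\n", cmos_read(0x10));
        return EXIT_SUCCESS;
    }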

Changing stuff stored in the UEFI NVRAM implies silently flashing the Firmware Flash EEPROM chip, albeit only a small partition specifically reserved for user data. However, it is possible to brick a Motherboard if the UEFI NVRAM has corrupted data and the Firmware doesn't know how to handle it. Because the UEFI NVRAM is completely non volatile in nature, you can't easily wipe it like the RTC SRAM. It is extremely hard to recover from these bricking scenarios: with an unbootable system, you can't use standard Software tools to clear the NVRAM, which means that you need to rely on something else. Moreover, the Flash EEPROM chips have rather low endurance as they are not intended to be written to frequently, so they are a possible source of long term reliability issues, as any Software that abuses writing to the UEFI NVRAM can cause the Flash EEPROM to become flaky.



Depending on the Motherboard, there are several available options for an end user to perform Firmware recovery. Some Motherboards have a Jumper that tells the Firmware to read a secondary Boot Block found in the main Firmware chip at an alternate address location, in case the primary one got corrupted (This may not be helpful to recover from a UEFI NVRAM bricking). Other Motherboards have two Firmware chips and allow selecting the backup one via a Jumper (Or switch automatically if an Embedded Controller detects that the main one fails to POST). More advanced solutions include Motherboards that have some type of Embedded Controller with a higher degree of autonomy than usual, being able to operate with a bricked Motherboard Firmware or even with no Processor installed. For example, the BMCs of Server Motherboards are usually fully autonomous (They have their own auxiliary Processor, RAM and Flash EEPROM), and if the main Firmware Flash EEPROM chip is located on a shared SPI Bus, the BMC has direct access to it and can be instructed to flash a new Firmware image via remote means. A few modern Motherboards have an advanced Super I/O chip (Or a separate discrete Embedded Controller? Not entirely sure) that is similar to BMCs in that it is fully autonomous, but loads a new Firmware image from a local USB Flash Drive instead of doing so remotely. The last two methods are very powerful solutions since they are useful for socketed platforms where you may have to flash a new Firmware version before being able to pass POST with a new Processor, which is a major problem if you just purchased a Motherboard and Processor combo where the Motherboard has an older Firmware that can't POST with it (Actually, AMD distributes what it calls a Boot Kit for Socket AM4 Motherboards that have no such autonomy, as with old Firmwares you can't boot with new Processors installed. AMD loans you a cheap first generation Processor so that you can boot, flash a new Firmware via Software means, then install the new Processor. This is unnecessary for Motherboards with autonomous Embedded Controllers, making them a major feature).

There is yet another option: Using an external EEPROM reprogrammer like a CH341A, so that you can do whatever you want with the Flash EEPROM chip. A major issue is that in almost every case, the SPI Flash EEPROM chips are soldered to the Motherboard. While it is possible to use clips to hook a soldered chip to the reprogrammer, like via a SOIC8 clip adapter, those are less user friendly than going the proper route, which is using a socketed chip version. Basic CH341A reprogrammers cost like 5 U$D or so, and even these are enough to get the job done. It may even be recommendable to include such a reprogrammer in a boxed retail Motherboard package that is intended to be used with Coreboot, as it is a cheap yet extremely useful tool for people that like to tinker with the Firmware (I can bet that toying with Coreboot will result in bricking the Motherboard rather often. And simultaneously bricking the BMC Firmware and the main one is still a possibility). In general, I think that just using a single socketed Flash EEPROM chip can suffice if rescue operations are handled either by an autonomous BMC/EC, or with the help of an external EEPROM reprogrammer. Note that the BMC also requires its own exclusive SPI Flash EEPROM.





The size of the Flash EEPROM is worth some consideration. The usual sizes are either 16 or 32 MiB (128 and 256 MBits, respectively). 16 MiB has already proven to be too small, as many older AMD AM4 Motherboards with 16 MiB chips that added support for the latest Ryzen 3000 series had to remove features or support for older Processors just to make room for the AMD AGESA code required by the new ones. 32 MiB is adequate for Firmware uses, but would still not be big enough for huge payloads like an embedded OS. Chips bigger than 32 MiB exist, albeit I'm aware that in order to use them the platform needs special SPI 4-Byte Address Mode support, which, due to lack of public documentation, I have no idea whether current AMD platforms support or not (Since that mode is supposed to be for sizes starting at 32 MiB, and 32 MiB chips are found in the wild in AM4 Motherboards, chances are that the Zen integrated SPI Controller does support 4-Byte Address Mode). In general, the bigger Flash EEPROM chips aren't much more expensive than their smaller counterparts. For reference, in the Winbond eStore, a 32 MiB Serial NOR Flash like the W25Q256JVEIM costs 2.8 U$D, whereas the bigger 64 MiB W25Q512JVEIM costs around 5 U$D each (Unavailable directly from the Winbond eStore, but available from other resellers).
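
To illustrate what 4-Byte Address Mode actually means at the wire level, here is a hedged C sketch using the standard JEDEC opcodes EN4B (0xB7) and 4-byte Read (0x13), as listed in the W25Q256JV Data Sheet but common across vendors. spi_transfer() is a hypothetical HAL primitive, stubbed out here so the sketch runs standalone:

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    #define CMD_EN4B   0xB7  /* Enter 4-Byte Address Mode */
    #define CMD_READ4B 0x13  /* Read Data with an explicit 4-byte address */

    /* Hypothetical HAL primitive; stubbed to print what would be clocked
       out on the bus under a single chip select. */
    static void spi_transfer(const uint8_t *tx, size_t tx_len,
                             uint8_t *rx, size_t rx_len)
    {
        printf("CS# low:");
        for (size_t i = 0; i < tx_len; i++)
            printf(" %02X", tx[i]);
        printf(" (then read %zu bytes), CS# high\n", rx_len);
        (void)rx;
    }

    /* Addresses past 16 MiB don't fit in the classic 3-byte read commands,
       so the flash must first be switched to 4-byte addressing. */
    static void read_above_16mib(uint32_t addr, uint8_t *buf, size_t len)
    {
        uint8_t en4b = CMD_EN4B;
        spi_transfer(&en4b, 1, NULL, 0);

        uint8_t cmd[5] = { CMD_READ4B, (uint8_t)(addr >> 24),
                           (uint8_t)(addr >> 16), (uint8_t)(addr >> 8),
                           (uint8_t)addr };
        spi_transfer(cmd, sizeof(cmd), buf, len);
    }

    int main(void)
    {
        uint8_t buf[16];
        read_above_16mib(0x1000000, buf, sizeof(buf)); /* First byte past 16 MiB */
        return 0;
    }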

While bigger SPI Flash EEPROMs may be nice, from a strict GiB/U$D perspective they're obscenely expensive. Moreover, in many cases it may be preferable to use a different type of internal memory that is completely independent from the one that contains the Motherboard Firmware. For example, Zen has an integrated eMMC/SD Controller, which means that it should be possible to have a soldered eMMC chip for storage, or better yet, an internal SD Card Slot (Or two from the same controller) so that you can pick any SD Card size, with the added bonus that it is removable. These could make big SPI Flash EEPROM chips redundant, as the Firmware could do basic platform initialization then load the bulk of the data, like an embedded OS, from an SD Card. Maybe the SD Card could even be used for the UEFI NVRAM, leaving just static code in the main Flash EEPROM, so as not to waste its endurance on writes.

Using internal SD Card Slots is uncommon but possible. Some Server systems have an internal SD Card Slot to use for Boot Disks, usually as a replacement for SATA DOMs (Disk On Module), and there are also some Servers that even have two SD Card Slots to use RAID 1 for redundancy, like the HP Proliant ML110 G7. Note that not everyone likes internal SD Card Slots, perhaps the best justification being that they are in a rather inaccessible location. Yet another alternative to an SD Card Slot could be an internal USB Port for a USB Flash Drive, also done in some Servers. However, Motherboards usually expose all USB Ports as external (Either on the back I/O panel or as headers for the Front Panel), so having an internal Port would mean losing an external one. In SoCs that have an eMMC/SD Controller, it usually goes unused in Desktop form factors, so assuming that there is enough free Motherboard real estate, why not put it to use?



2 - COREBOOT MINIMUM PAYLOADS


These are the four most common Coreboot payloads, with TianoCore being the most important one:


   Linux Kernel

Coreboot can boot the Linux Kernel directly as an actual payload, limiting itself to initializing only the bare minimum Hardware, then letting the Linux Kernel do the rest (This is the reason why Coreboot was initially named LinuxBIOS. Not to be confused with the current LinuxBoot). From a time-to-market perspective, this should be the quickest one to get working. Users that rely on virtualization to use Windows inside a QEMU VM on a Linux host like I do (Ever heard about using PCI Passthrough with a Video Card to make a Windows gaming VM?) could adapt easily without missing any functionality, since it should be pretty much the same general experience. However, this would never suit mainstream users that are used to bare metal Windows.



   TianoCore (UEFI)

TianoCore serves as a pure UEFI implementation (UEFI Class 3). Its main use is to do bare metal booting of 64 Bits Windows versions that are installed on a GPT formatted disk, a combination that I like to call UEFI-GPT. As such, having a TianoCore payload should be the bare minimum requirement for a Motherboard with Coreboot aimed at mainstream consumers. TianoCore should also be able to directly load Device Firmwares from discrete PCIe Cards whose Option ROMs include a UEFI compatible image with the standard UEFI header as defined by the UEFI Specification. I'm not sure if implementing the UEFI NVRAM is mandatory (So that it has a place to save the UEFI Boot Parameters and the Public Key Database), if it can be stored on arbitrary media, like as a file located in a GPT formatted disk ESP (EFI System Partition) partition, or if instead, it can be fully volatile without loss of functionality (By always loading by default \efi\boot\bootx64.efi from the ESP on the first disk drive. Or at least, by storing a single disk drive or file path in the RTC SRAM).
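
As a concrete reference for what those UEFI Boot Parameters look like once an OS is running, here is a small Linux C sketch that reads the BootOrder variable through efivarfs. Variable files are named Name-VendorGUID (the GUID below is the standard EFI_GLOBAL_VARIABLE one), and the first 4 bytes of each file are the attribute flags:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        FILE *f = fopen("/sys/firmware/efi/efivars/"
                        "BootOrder-8be4df61-93ca-11d2-aa0d-00e098032b8c", "rb");
        if (!f) { perror("fopen (UEFI-booted system required)"); return 1; }

        uint32_t attributes;  /* efivarfs prepends 4 bytes of attribute flags */
        uint16_t entry;
        if (fread(&attributes, sizeof(attributes), 1, f) != 1) return 1;
        printf("Attributes: 0x%08x\nBoot order:", attributes);
        while (fread(&entry, sizeof(entry), 1, f) == 1)
            printf(" Boot%04X", entry);  /* Each entry names a Boot#### variable */
        printf("\n");
        fclose(f);
        return 0;
    }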



   SeaBIOS (BIOS)

SeaBIOS provides a pure BIOS implementation. It is mostly used to support legacy OSes and PCIe Cards with Option ROMs that only have a BIOS compatible Device Firmware. All 32 Bits Windows versions require SeaBIOS to work, as they need a BIOS interface plus an MBR formatted disk, a combination that I like to call BIOS-MBR. It is also used to boot 64 Bits Windows versions that were installed on MBR formatted disks instead of the modern GPT (64 Bits Windows versions allow for either BIOS-MBR or UEFI-GPT, no other combination allowed. Some Boot Loaders used by Linux allow for BIOS-GPT).
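
Telling these combinations apart on an existing disk is straightforward, since both partitioning schemes are identified by fixed signatures in the first two sectors. A minimal C sketch (assumes 512-byte sectors; point it at a disk like /dev/sda as root):

    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s /dev/sdX\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        unsigned char sector[512];

        /* LBA 0: a valid MBR ends with the 0x55 0xAA boot signature. */
        if (fread(sector, 1, 512, f) != 512) return 1;
        int has_mbr = (sector[510] == 0x55 && sector[511] == 0xAA);

        /* LBA 1: a GPT header starts with the "EFI PART" signature.
           GPT disks keep a protective MBR, so both checks can be true. */
        if (fread(sector, 1, 512, f) != 512) return 1;
        int has_gpt = (memcmp(sector, "EFI PART", 8) == 0);

        printf("MBR signature: %s, GPT header: %s -> likely %s\n",
               has_mbr ? "yes" : "no", has_gpt ? "yes" : "no",
               has_gpt ? "UEFI-GPT" : (has_mbr ? "BIOS-MBR" : "unformatted"));
        fclose(f);
        return 0;
    }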



   TianoCore + SeaBIOS as CSM

This combination should in theory be functionally equivalent to modern consumer Motherboards, since pretty much all of their Firmwares universally support both the UEFI and BIOS interfaces via a CSM, covering all use cases. However, as far as I know, SeaBIOS as CSM is lightly tested, because for both bare metal and virtualization use cases, developers and users prefer to use the pure Firmware forms, so it is not a stress tested code path. I'm not sure how solid they are when used together.



3 - COREBOOT SETUP MENU


One major drawback of Coreboot is that it doesn't have any sort of builtin configuration tool like consumer Firmwares have, where you can press the Del key during POST to get into the Firmware setup. In Coreboot, nearly everything is expected to be configured at compile time. The only available menu is usually provided by the payload, so that you can choose what you want to boot. This is completely inadequate for the masses, as they would want something that has the look and feel that they're already used to (Trivia: The original 1981 IBM PC 5150 and its successors, the IBM PC/XT, PC/AT and PS/2 series, didn't have a builtin BIOS Setup application, either. While the RTC and its battery backed SRAM were introduced in the IBM PC/AT, the BIOS Setup wasn't embedded into the Motherboard ROM; instead, it used a bootable configuration diskette to modify the settings stored in the RTC SRAM. Third party Firmware developers like Phoenix Technologies began the practice of including a builtin BIOS Setup in ROM, beginning with BIOSes for IBM PC/AT clones).



Developing at least a basic builtin configuration tool (It can be text based, it doesn't have to be a modern one with fancy GUIs) will be pretty much required for any attempt to make Coreboot usable by standard users, as I doubt that lots of people will even dare to install the compiler toolchain required just to change basic settings. Additionally, an alternative toolchain that is usable from within Windows may be necessary (I think that this can be done via Cygwin), so that Windows-only users can toy around with Coreboot code. Otherwise, Coreboot will remain just in the realm of Linux power users that have programming skills.

Regardless, implementing runtime modifiable settings will also require major modifications to the way that Coreboot works, since instead of relying exclusively on hardcoded settings, the platform initialization code will have to check some form of memory (The RTC SRAM is already too small. As explained before, in most UEFI Firmwares, some part of the Flash EEPROM is used as user NVRAM, so changing settings may actually involve silently flashing a part of the Flash EEPROM chip) to see what the user wants and initialize the Hardware accordingly.
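
Purely as an illustration of what such a settings store would need (this is not actual Coreboot code; the fields and magic value are made up), here is a C sketch of a versioned, checksummed settings block that could live in either the RTC SRAM or a small flash partition, with the validation fallback that avoids the NVRAM-corruption bricking scenario described earlier:

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    #define SETTINGS_MAGIC 0x42435453u  /* Arbitrary tag, invented for this sketch */

    struct settings_block {
        uint32_t magic;       /* Identifies an initialized block */
        uint16_t version;     /* Layout version, bumped on incompatible change */
        uint16_t checksum;    /* Simple 16-bit sum over the block */
        uint8_t  iommu_enable;
        uint8_t  csm_enable;
        uint8_t  igp_aperture_log2mib; /* e.g. 8 == 256 MiB */
        uint8_t  reserved[5];
    };

    static uint16_t checksum16(const void *data, size_t len)
    {
        const uint8_t *p = data;
        uint16_t sum = 0;
        while (len--)
            sum += *p++;
        return sum;
    }

    /* On boot, the Firmware would validate the block and fall back to its
       compiled-in defaults when the magic or checksum don't match, which is
       what avoids bricking on corrupted user settings. */
    static int settings_valid(const struct settings_block *s)
    {
        struct settings_block copy = *s;
        copy.checksum = 0;
        return s->magic == SETTINGS_MAGIC &&
               checksum16(&copy, sizeof(copy)) == s->checksum;
    }

    int main(void)
    {
        struct settings_block blank = {0};  /* E.g. an uninitialized block */
        printf("Blank block valid: %s -> use compiled-in defaults\n",
               settings_valid(&blank) ? "yes" : "no");
        return 0;
    }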



4 - OPEN SOURCE FIRMWARE USE CASES


This is a rather detailed list of all the use cases where I believe that an open source Firmware can completely obliterate a standard proprietary Firmware, including settings often missing from consumer Motherboard Firmwares and other miscellaneous stuff.



4.1 - Freely downgrade to a previous Firmware version

Modern Firmwares are so prone to bugs that it is often the case that upgrading to the latest Firmware version breaks functionality that previously used to work fine (Keep in mind that regressions also apply to open source projects, including Coreboot, but at least you can audit which specific commit broke something). Firmware regression issues would be easy to work around if you could roll back to the previous known-good Firmware version while waiting for a fix, and that is precisely where the issue lies: In many cases, you can't freely downgrade the Firmware, as there are several Motherboards whose Firmware tools do not allow you to flash an older Firmware version than the one currently installed. This means that if you flashed a new Firmware version and figured out that it broke something, you have to live with the broken Firmware until the vendor pushes a fix. Alternatively, you can also attempt to force flash an older version, with a dramatically increased chance of bricking the Motherboard. External reprogrammers can overcome any Software imposed limitations, but there are other issues related to that (Mainly the preservation of the unique Motherboard data found in the Firmware, which requires manual work, and the extra tools required because the Flash EEPROMs are usually soldered).


Firmware regression issues have happened rather often on AMD AM4 platforms, as they have a great deal of added complexity due to AMD supporting multiple Processor generations at once. For a specific example, some AMD AGESA versions beginning with ComboPI 0.0.7.2 silently broke PCI Passthrough, and because you were unable to flash a previous Firmware version, you could get stuck with a broken system. As PCI Passthrough users are a small crowd, that issue was presumably low on the priority list of things that AMD had to get fixed. Though AMD eventually fixed it with the 1.0.0.4 AGESA version, between the discovery of the issue, AMD fixing it and releasing the new AGESA code to Motherboard manufacturers, and finally waiting for them to perform final validation before releasing an end user ready Firmware update, a bit more than half a year had already gone by. Thankfully, there was a workaround available involving patching the Linux Kernel, else...



4.2 - Better control over hard to troubleshoot performance issues

A bad Firmware can severely affect the performance of the computer in not very subtle ways, yet depending on the context, it can be extremely hard to pinpoint the Firmware itself as the culprit. Slower POST times, high DPC Latency (Which usually manifests as random stuttering or audio glitches) and other minor issues can be Firmware related, with no solution from the OS or Driver side. Typically, users that are able to identify a specific Firmware version as the cause of these issues do so because they flashed a new Firmware version, immediately noticed the regressions, then flashed back an older Firmware version, which fixed the issues.


At the very least, an open source Firmware would allow developers and savvy users to track any changes between versions until they find the exact cause of the regression (After all, Hardware enthusiasts absolutely love to beta test whatever latest Driver or Firmware version is available). If you couple this with the previous issue that you aren't always able to flash back an older Firmware version, you have just left open the possibility of a dead-end scenario...



4.3 - Better control over which Option ROMs are loaded and which aren't

PCI Option ROMs contain Firmware-level Drivers. The Firmware can load them by means standardized in the PCI specification so that it can have basic support for an otherwise unknown PCI Device, which for some reason has to be functional at POST time. Option ROMs are physically located either as the contents of a Flash EEPROM chip on a discrete PCI/PCIe Card, or embedded into the Motherboard Firmware Flash EEPROM itself if it is a Device integrated into that Motherboard, so as to save the cost of having an independent Flash EEPROM chip per builtin Device. As Flash EEPROMs are writeable, some Device manufacturers deliver Option ROM updates for discrete PCIe Cards that are similar in nature to Motherboard Firmware updates, whereas Motherboard Firmware updates usually bundle updated Option ROMs for the integrated Devices, too. An Option ROM may contain one or more Firmware images, identifiable by standardized headers so that the Firmware can know which specific image is meant for it to load. Typically, all Option ROMs contain either a BIOS-only Firmware image, a UEFI-only image, or both, so that they can be loaded by either BIOS/CSM or UEFI type Firmwares.

A whole bunch of Device types use Option ROMs because a part of their usefulness relies on being available before booting an OS. A prominent example is the cheap SATA Controllers that come as discrete PCIe Cards: Some are considered "non-bootable" and are only usable by the OS after loading its Drivers, making them useful just for adding more storage drives. Others, instead, have a Flash EEPROM on the card itself with an Option ROM, which allows the Firmware to operate that SATA Controller right after POST, thus making disks connected to it bootable. Network adapters also have Option ROMs so that they can use iPXE for remote booting. But the most interesting Device class with Option ROMs is Video Cards, as all of them have an Option ROM so that the Motherboard Firmware can load the Video Card VBIOS or UEFI GOP (Formal names for the BIOS and UEFI images in the specific case of Video Cards) for the computer to have a working basic video output before the OS Drivers load. For Notebooks with discrete GPUs or any Motherboard that supports integrated GPUs, the Option ROM for the Video Card is embedded as part of the Motherboard Firmware itself. Note that if you're running in UEFI-only mode (CSM disabled) and there is a Device that only has a BIOS image (Or vice versa on old Motherboards without UEFI support), that Device will not be available at POST time, only after booting an OS, but I think that you should have figured that out by now.
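
Those standardized headers are simple enough that walking them fits in a few lines of C. Here is a hedged sketch that scans an Option ROM dump and reports which image types it carries, following the 0x55AA image header and the "PCIR" data structure defined in the PCI specification:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    static const char *code_type_name(uint8_t t)
    {
        switch (t) {
        case 0x00: return "Legacy BIOS (x86)";
        case 0x03: return "UEFI";
        default:   return "Other";
        }
    }

    int main(int argc, char **argv)
    {
        static uint8_t rom[16 * 1024 * 1024];
        if (argc < 2) { fprintf(stderr, "usage: %s oprom.rom\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }
        size_t rom_len = fread(rom, 1, sizeof(rom), f);
        fclose(f);

        size_t off = 0;
        while (off + 0x1A <= rom_len) {
            if (rom[off] != 0x55 || rom[off + 1] != 0xAA)
                break;                           /* Not a valid image header */
            /* The word at +0x18 points to the PCI Data Structure ("PCIR") */
            uint16_t pcir = rom[off + 0x18] | rom[off + 0x19] << 8;
            const uint8_t *p = rom + off + pcir;
            if (memcmp(p, "PCIR", 4) != 0)
                break;
            uint16_t len512   = p[0x10] | p[0x11] << 8; /* In 512-byte units */
            uint8_t code_type = p[0x14];
            printf("Image at 0x%zx: %s, %u KiB\n",
                   off, code_type_name(code_type), len512 / 2);
            if (p[0x15] & 0x80)                  /* Indicator bit 7: last image */
                break;
            off += (size_t)len512 * 512;
        }
        return 0;
    }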


The standard procedure for the Firmware is to always load all the available Option ROMs, as it makes sense that if you plugged in a card, it is because you want to use it. However, in many scenarios, this approach is completely inappropriate, as at times there are good reasons for you to NOT want to load a specific Option ROM. Other times, it may be possible that the Firmware doesn't pick the correct Option ROM image type (For example, you have CSM enabled for Windows 7 compatibility yet want to load the UEFI Firmware image of a particular Option ROM and not the BIOS one). Some Server and Workstation grade Motherboards may allow you full control over both scenarios with PCIe Slot level granularity and even for integrated Option ROMs, like the Supermicro X10SLH-F and ASUS Z9PR-D12/4L Motherboards (Note that besides "Disabled", "UEFI only" and "BIOS only", there are some Firmwares that also allow for priority, so that it defaults to loading UEFI or BIOS first, then the other type if the first priority was not found). Sadly, this function is absolutely absent from consumer grade Motherboards, and I haven't found anyone that ever attempted to add similar options via BIOS modding.


There are a few niche scenarios where being able to skip loading an Option ROM is useful. One of them is for compatibility purposes: Some Firmwares, for some reason, have trouble when the Option ROM contains multiple Device Firmware images. There is a very recent case of a Sandy Bridge era Motherboard where a Video Card that had both a VBIOS and a UEFI GOP made the Motherboard Firmware hang during POST, yet removing the UEFI GOP image from the Video Card Option ROM finally made it work (Note that that guy was using PCIe Hotplug to physically plug the Video Card in AFTER the computer POSTed and found that it worked that way, pointing out that the problem was specifically Firmware related). A whole bunch of compatibility issues where installing an old PCI Card in a Motherboard doesn't allow it to pass POST may be simply resolved by not loading its Option ROM (Before installing the card, you configure the Firmware telling it to not load the Option ROM of a specific PCIe Slot, shut down the computer, install the card in that slot, then turn it back on), letting the OS Drivers initialize it instead.

The other important scenario involves PCI Passthrough. When there are multiple Video Cards installed, the Firmware may automatically decide which one it wants to use as the primary video output and not give you enough options to select another card. As the Firmware and the OS can choose two entirely different display outputs, it may happen that the Firmware appears on one Monitor (Or even on the same Monitor, but on another video input that you have to select manually), then when the OS boots, the working screen moves to another one. Some people worked around it by enabling the CSM, as it seems that the Firmware then uses a different order to pick the main Video Card, but that doesn't guarantee that it is the one you want, and you lose features like Fast Boot. Regardless, the thing is that if you can disable the loading of the Option ROMs found on cards in a specific slot, you could turn off all the Option ROMs but the one of the card that you want to use for the virtualization host. It is also convenient because some Video Cards have strange issues if they are initialized by the host (Either by the Firmware via Option ROM or by the OS GPU Drivers) then passed through to a VM, as the initialization procedure expects the GPU to be in a fresh state, thus not letting the Firmware load its Option ROM and blacklisting the host Drivers may allow it to reach the VM in a completely pristine state.


Finally, disabling the loading of any external Option ROMs can be a major security feature in the years to come. The USB 4 specification absorbed features from, and compatibility with, Intel Thunderbolt 3, which has a mode where it works like a sort of external PCI Bus. Thunderbolt 3 Devices, adapters and cables can have Option ROMs in them, functionally identical to what was already described. You can bet that there will be someone that attempts to embed malware in them, as it is easier to expect a user to plug in a cheap cable than to open the computer case to add a new PCIe Card. And you don't want malware that is already present from before the OS boots...



4.4 - Supporting new Devices via embedded modules or Option ROMs

Back when the NVMe specification for PCIe SSDs was new, it was not possible to boot from such SSDs due to the complete lack of Firmware support for NVMe. With no Firmware support, these SSDs were useful only as data storage drives, accessible from within an OS with proper NVMe support. The exceptions were a few NVMe SSDs that had an Option ROM with an NVMe Driver, which the Firmware could load at POST time, getting them working even on old Motherboards. In truth, natively booting from an NVMe SSD was never impossible for older Firmwares, as it was later proven that you could embed an NVMe UEFI Driver in them and get everything working. But, as you know, big corporations need new artificial features to sell new products...


During 2014, Intel was supposed to release the new Broadwell based Processors for the desktop market, Broadwell being the 14nm Haswell shrink. However, for several reasons, the launch was massively delayed, so much so that Broadwell Processors came more than a year later, just months before Skylake, when it didn't really make sense to buy one anymore (And let's not forget that desktop Broadwell Processors dropped compatibility with first generation Haswell Chipsets, albeit even that is highly debatable since the Xeon E3 1200v4 line is compatible with the Haswell era C226 Chipset). As a stopgap measure, Intel decided to release a few new Haswell models, known as Haswell Refresh (Including the Core i5/i7 4690K/4790K Devil's Canyon, which were still Haswell but with a better custom package), along with the H97 and Z97 Chipsets that were originally intended for Broadwell.

One of the most touted features of Motherboards using these Chipsets (Because they had to find enough justifications for the new Chipsets without meaningful new Processors to support) was support for native NVMe Boot, so that you could boot from a PCIe NVMe SSD like the Intel 750 SSD series in discrete PCIe Card format, or one of the new M.2 PCIe ones. The problem? That was not a Chipset feature at all. Actually, the Chipset did NOTHING. It is just that, as part of the whole platform package, Firmwares began to include an NVMe Driver so that they could natively boot from these new NVMe SSDs, but that could otherwise be considered a standalone Firmware feature. Obviously, no Motherboard manufacturer bothered to update older Motherboard Firmwares to support NVMe Boot, albeit some older Motherboards somehow supported it (Actually, Intel acknowledged the ASUS Z87-Expert and Intel DH77DF as supporting NVMe Boot in its Booting from an NVMe PCIe Intel Solid-State Drive Technological Brief. It seems that Intel never said that you required the new Chipsets for NVMe Boot).


As you can guess, BIOS modding came to the rescue. The modular nature of UEFI Firmwares made such a mod relatively easy, and similar procedures are performed to update other modules and embedded Option ROMs.



4.5 - Supporting new Processors and Microcode updates

Typically, Processor support depends on Microcode updates. At some point in time, Processors mostly worked out-of-the-box even with poor Firmware support, while the Microcode updates would merely fix bugs/errata (For reference, in 2005, the first Athlon 64 X2s could boot an OS and be usable without proper Firmware support, albeit you only saw one Core. That was still enough to flash an updated Firmware). However, these days, it is often the case that the Processor doesn't even pass POST if it isn't provided with a Microcode update early on, so these updates are critical.

At times, a Motherboard manufacturer may decide not to provide Firmware updates with a newer Microcode version that would allow a Motherboard to use new Processors that the platform is actually meant to support. One of the best recent examples is Intel's last Motherboard series for Haswell, which didn't support Haswell Refresh and Devil's Canyon when absolutely everyone else did, simply because Intel decided to screw its customers. Worry not, BIOS modding can also deal with these issues.


Other scenarios where Microcode updates may help include supporting Processors that a Motherboard may have never been intended to support. For example, when the LGA 771 to LGA 775 mod became popular so that you could use the cheaper and better binned LGA 771 Xeon Processors in consumer LGA 775 Motherboards, you were often required to do a BIOS mod to manually add the Microcode for those to your Firmware, which makes sense if you consider that because the LGA 775 Motherboards were never intended to be used with these Processors, they had no reason to include their Microcode embedded in the Firmware. Something similar happens with people that want to use Engineering Sample Processors, as sometimes only early Firmware versions include Microcode for those, since it gets removed later.

Yet another reason for messing with Microcode is that some people may want to intentionally run older Microcode versions due to feature removals in later ones, like when Intel removed TSX support from Haswell Processors due to a major errata via a Microcode update, or when it removed the capability of non-K Skylake Processors to do BCLK overclocking, also via a Microcode update, or for those users that consider that the Spectre and Meltdown mitigations aren't worth the performance hit.


It is important to note that in recent Intel Processor generations, the Intel ME (Management Engine) is also involved in Processor support because it has to make sure that you're using a valid Processor and Chipset combination, as otherwise it can refuse to POST (This is perhaps THE reason why you will never see it getting open sourced; it is there to make sure that you don't get features that you didn't pay for). For example, since the Skylake generation, if you want to use an LGA 1151 Xeon E3 1200v5/v6 series, you're supposed to have a Motherboard with an Intel C series Server/Workstation Chipset as otherwise they will fail to POST, whereas the previous Xeon E3 generations worked flawlessly in normal consumer Desktop Motherboards. They can still work, if you remove most of the Intel ME code with some BIOS modding, but don't expect Intel to help open source efforts if you tell them that you want to make it easy to do all the things that you aren't supposed to...



4.6 - Advanced PCIe Bifurcation options

The PCI Express Bus is based on a Point-to-Point topology. A PCIe Port is linked exclusively to just one PCIe Device, unlike the older PCI, where you could have multiple PCI Devices coexisting and sharing the same parallel PCI Bus. Modern Processors have one or multiple integrated PCIe Controllers with up to 16 Lanes each. Because using so many Lanes for a single Port is at times complete overkill, the PCIe Controllers can optionally support bifurcation, which allows partitioning these Lanes into two or more Ports so that you can install more PCIe Devices. Depending on the platform, each 16x PCIe Controller may be configured in different modes, the most common ones being 8x/8x (On both Intel and AMD consumer platforms), 8x/4x/4x (On Intel consumer platforms such as LGA 1155/1150/1151) and 4x/4x/4x/4x (On Intel HEDT Core i7/i9 and Xeon E5, and AMD ThreadRipper, EPYC, and perhaps unofficially, Ryzen). Note that on both Intel and AMD consumer platforms, you are supposed to only be able to bifurcate if you're using a specific Chipset (This falls into a category that at least Intel calls "Chipset controlled Processor features", as while all the Processors support it, you're not supposed to make use of the feature if you didn't pay for a premium Chipset).

The way that you typically see PCIe bifurcation in action is when a Motherboard has two or more PCIe Slots (Typically 16x sized) wired to the same Processor PCIe Controller. For example, it is very common to see Intel LGA 1151 Z370/Z390 Motherboards with either two or three 16x PCIe Slots that work in 16x/0x/0x, 8x/8x/0x, or 8x/4x/4x modes (You can physically insert a PCIe Card with a 16x connector in the second and third slots, but it will get the bandwidth corresponding to only 8 or 4 PCIe Lanes. Depending on your use case, you may not even lose performance).

Between the PCIe Controller and the PCIe Slots, there are chips that work as physical PCIe Switches, whose purpose is to reroute the PCIe signals from the wires going to one Slot to another one. In a Motherboard with a single 16x PCIe Slot, everything is easy because the Processor PCIe Controller and the PCIe Slot are directly wired, but in a Motherboard with 2 or 3 PCIe Slots, you will always see that 8 Lanes are hardwired to the first PCIe Slot, whereas the other 8 are wired to some passive PCIe Switches (Like the Texas Instruments HD3SS3415) that can physically route these Lanes either to the first PCIe Slot, to make for a full 16x link, or to the second one, leaving both slots at 8x/8x (The other 8 Lanes of the second slot are physically not connected, thus it can never be used at 16x). If there is a third PCIe Slot, then 4 of the 8 Lanes that already go through a first layer of passive PCIe Switches will go through yet another set of PCIe Switches, so that they can be rerouted either to the second PCIe Slot to leave it at 8x, or to the third slot, halving it again to make 4x/4x (12 Lanes of the third slot are not connected). A more flexible arrangement where you can physically route all 16 Lanes to any of the 3 slots, so that a single card could work at 16x in any of them, would be far more complex and expensive; in the designs that try to do so, you will see full fledged PCIe Switches that are a formal part of the PCIe fabric, like the PLX PEX series, instead of the passive ones that merely reroute signals.

Finally, the PCIe Slots have a pair of Pins known as PRSNT1 and PRSNT2 that are used for out-of-band Presence Detect, so that the Motherboard can check if there is an installed PCIe Card before initializing the PCIe Controller. The Firmware is supposed to check these Pins so that if it detects that there is a PCIe Card installed in a specific Slot, it can automatically configure the PCIe Controller to bifurcate accordingly, along with the passive PCIe Switches that are in the way, to route the Lanes to that Slot. Thus, if you install a PCIe Card in the third slot, even if there is nothing in the first two slots, it will automatically bifurcate to 8x/4x/4x.
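
A quick way to verify how the Lanes actually ended up partitioned is to read the negotiated link width of each Device out of its PCI config space. A minimal Linux C sketch (the default BDF address is just a placeholder; reading config space past the first 64 bytes needs root):

    #include <stdio.h>
    #include <stdint.h>

    int main(int argc, char **argv)
    {
        char path[128];
        /* The BDF below is only a placeholder; pass your Device's address */
        snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/config",
                 argc > 1 ? argv[1] : "0000:01:00.0");
        FILE *f = fopen(path, "rb");
        if (!f) { perror("fopen"); return 1; }

        uint8_t cfg[256];
        size_t got = fread(cfg, 1, sizeof(cfg), f);  /* Full read needs root */
        fclose(f);
        if (got < 64) { fprintf(stderr, "short config read\n"); return 1; }

        uint8_t cap = cfg[0x34];               /* Capability list head */
        while (cap && cap + 0x13 < (int)got) {
            if (cfg[cap] == 0x10) {            /* PCI Express capability */
                /* Link Status at cap+0x12; bits 9:4 = negotiated width */
                uint16_t lnksta = cfg[cap + 0x12] | cfg[cap + 0x13] << 8;
                printf("Negotiated link width: x%u\n", (lnksta >> 4) & 0x3F);
                return 0;
            }
            cap = cfg[cap + 1];                /* Next capability pointer */
        }
        fprintf(stderr, "No PCIe capability found\n");
        return 1;
    }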


In recent times, several PCIe Cards have appeared that make use of the same bifurcation function of the Processor PCIe Controller (The Chipset can also have bifurcatable Ports, but they're less interesting as you have fewer Lanes to work with, and for any intensive use you will be bottlenecked by the uplink to the Processor anyway) but in a new style: In-slot bifurcation. Basically, you have the Lanes physically wired to the same Slot, but they are considered part of separate Ports. For example, you can have all 16 Lanes wired to a single 16x PCIe Slot, with the Processor bifurcating them as 4x/4x/4x/4x, as if they were going to separate PCIe Cards. This allows for niche adapters like the AsRock ULTRA QUAD M.2 CARD, which in a single PCIe 16x Slot fits four M.2 PCIe 4x NVMe SSDs (Each one is considered a single PCIe Device). Other similar adapters could provide, instead of 4 M.2 Slots, 4 U.2 Ports or 4 OCuLink Ports, as each of those can carry 4 PCIe Lanes.

An extremely niche usage for in-slot bifurcation is when it is coupled with PCIe Risers (8x/8x, 8x/4x/4x, and so on). These are usually sought after by mITX users so that they can plug more PCIe Devices into their small Motherboards, as they only have a single PCIe Slot to work with (May as well have gone mATX instead?). Note that when there are multiple PCIe Devices in a single PCIe Slot, the PCIe Slot Presence Detect Pin does not work as intended, since it can just be used to tell whether there is a PCIe Card installed, but not whether the card has more than a single standalone PCIe Device. Thus, in-slot bifurcation always has to be manually configured.


So, what is the problem regarding PCIe bifurcation? A lot of Motherboard Firmwares don't allow you to manually configure bifurcation at all. Ironically, in many cases, bifurcation is actually supported by the Firmware, but the option to select the bifurcation mode is hidden by default. At the end of the day, BIOS modding comes to the rescue, showing that it is possible to get working bifurcation.

A good Firmware should allow independently configuring the bifurcation and the passive PCIe Switches that reroute the Lanes, as you may have a Motherboard with multiple PCIe Slots coming from the same PCIe Controller, with a use case where you want to do in-slot bifurcation to 8x/8x or 4x/4x/4x/4x without rerouting the Lanes to other PCIe Slots because you have nothing installed in them. Or maybe you want to have two physical 16x Slots with just 8 Lanes each, then simultaneously do in-slot bifurcation to 4x/4x on both, as if you wanted to use two PCIe 8x-to-2 M.2 adapters like these instead of the previously shown PCIe 16x-to-4 M.2 one.



4.7 - IOMMU virtualization support

The IOMMU is a Hardware component that is critical for virtualization users that use PCI Passthrough, and it has also recently become usable by mainstream Windows users as an optional security feature that restricts misbehaving Devices from performing DMA to memory that doesn't belong to them.


Back in 2012-2013, when the IOMMU (Also known as Intel VT-d/AMD-Vi) was new on both the Intel LGA 1155 Sandy Bridge and AMD AM3+ Bulldozer/FM2 Trinity platforms, it was extremely hard to get a Motherboard that had it in a working condition. There was a lot of confusion regarding which Hardware actually supported the IOMMU, but theoretically, from these generations onwards the lone Processor sufficed, as the IOMMU had become integrated into it, whereas in previous generations the IOMMU used to be a Chipset feature (I recall having argued on XtremeSystems with an ASUS community manager who was adamant that they didn't include support for the IOMMU in Intel Motherboards using non-Business Chipsets because "according to Intel Ark these Chipsets don't support VT-d", even when other Motherboard manufacturers managed to actually get the damn thing working on any Chipset).

Even if the Hardware itself supported it, the Firmware may not include a simple On/Off toggle to actually enable it. Early PCI Passthrough adopters that wanted to use untested Hardware usually had to read the Motherboard Manual just to check if there was a photo or something else that could prove that the Firmware had such an option. Yet, EVEN if there was an option to enable the IOMMU, the Firmware may not correctly generate the required DMAR (Intel) or IVRS (AMD) ACPI Tables, leaving it unusable. Some Firmware updates fixed these issues, others borked them and forced you to downgrade to a previous version. Motherboard manufacturers typically provided no support and didn't bother to fix IOMMU related issues, so if you already had the Hardware and figured out that you wanted to try PCI Passthrough on a Motherboard with a broken Firmware, chances are that you were out of luck. As a workaround, both the Linux Kernel and Xen added support for optional boot parameters that allowed you to attempt to fix missing or incomplete ACPI Tables from the OS side, with some people actually managing to get the IOMMU working in that specific scenario (Assuming that you somehow figured out what parameters to input).
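
As a sketch of what the OS side of this looks like on Linux (The Kernel parameters are documented in the Kernel's kernel-parameters.txt; the IVRS override values below are placeholders taken from that documentation, as the correct ones are system specific):

    # Check whether the Firmware published the IOMMU ACPI Tables at all
    ls /sys/firmware/acpi/tables/ | grep -E 'DMAR|IVRS'

    # Kernel boot parameter to enable the Intel IOMMU (it is off by default)
    intel_iommu=on

    # AMD only override to fix a buggy IVRS Table entry, here mapping IOAPIC ID 10 to Device 00:14.0
    ivrs_ioapic[10]=00:14.0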


After more than half a decade, BIOS modding has been unable to provide solutions for the lack of Firmware support for the IOMMU in these old Motherboards. In most cases the option isn't even in the Firmware code to begin with, so it is not as simple as unhiding some option. Adding an option from scratch is something that I recall having seen done on a few rare occasions, but doing so for the IOMMU is far harder, since it is not just a matter of flipping a Bit in the Processor or Chipset configuration registers: There is added complexity because the Firmware has to generate the IOMMU related ACPI Tables. Sadly, all attempts that I recall to add IOMMU support to a Motherboard that didn't have it to begin with have failed. The only good thing is that since Microsoft began in 2015 to use the IOMMU as part of its Device Guard feature in some Windows versions, every new Motherboard has had the IOMMU consistently working out-of-the-box.



4.8 - Configurable IGP Aperture Size

As everyone knows, most Processor models of the consumer Intel platforms have integrated GPUs. A rather obscure option related to the integrated GPU is its Aperture Size, which allows the IGP to reserve an address range for its own usage that, depending on Processor generation, goes from 128 MiB to 512 MiB (Haswell), 1 GiB (Broadwell) or 2 GiB (Skylake and later). Typically, increasing the Aperture Size is completely worthless. However, Intel has a GPU virtualization feature, known as GVT-g, that allows you, within a Linux or Xen host, to create multiple GPU instances, so that you can give multiple VMs their own vGPU instead of crappy emulated ones. How many instances you can create depends on the reserved Aperture Size, so the higher it is, the more VMs can have a vGPU. Sadly, Motherboard manufacturers seem to think that no one has a reason to modify that option, so they don't display it and simply leave it at a hardcoded 256 MiB default.
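
For reference, this is roughly how GVT-g instances get created on a Linux host via the standard VFIO-mdev sysfs interface (A sketch; the vGPU type name is a placeholder, as the available types, and how many instances of each fit, depend on the Processor generation and the configured Aperture Size):

    # List the vGPU types that the i915 Driver offers on this system
    ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/

    # Create a vGPU instance of one such type, to be handed to a VM via VFIO
    echo $(uuidgen) > /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create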


Once again, BIOS modding could come to the rescue.



4.9 - User defined unique Firmware data

Most people, even those that spend a lot of time ranting about how the sole existence of the Intel ME and AMD PSP violates their privacy, are completely unaware that a given Motherboard may contain in its Firmware data that uniquely identifies that precise unit, so that a full Flash EEPROM dump will never be identical to a dump from another Motherboard of the exact same model, revision, and Firmware version. For example, the SMBIOS Serial Number and UUID values, and the MAC Address of the Chipset integrated MAC, are all contained in the Firmware, and are programmed at factory time. What privacy advocates should be worried about is that they are easily readable by Software. Windows typically uses these for activation purposes, and Motherboard manufacturers for warranty. There may be other applications that rely on those, probably for identification purposes, like Apple does for its cloud services. I personally consider the existence of unique Firmware data a possible privacy risk. Regardless of how useful or harmful you believe the unique data is, it is valuable because it is part of the Motherboard "factory defaults". Basically, without that data, you can't set up your Motherboard Firmware to be as it was when you unboxed it in new condition.
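
To see for yourself how easily readable this data is, on Linux the SMBIOS fields can be queried with dmidecode (These are standard dmidecode string keywords; run as root):

    # Read identifiers that were programmed into the Firmware at factory time
    dmidecode -s system-serial-number
    dmidecode -s system-uuid
    dmidecode -s baseboard-serial-number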

Whenever you are recovering from a brick situation, it is possible that you lose that unique data. The proper procedure is to backup the corrupted Firmware, figure out where the unique data lies, then copy it to the same place in the sane Firmware image that you're intending to flash. On some Motherboards you may have stickers on the PCB with that info, so that you can rebuild the unique data even if you didn't back it up first, but you still need to figure out a way to add it back to a sane Firmware image. Failure to do so will result in the MAC Address reverting to a default (Or placeholder) like 88:88:88:88:87:88, Windows losing its activation status, and other identification or configuration problems.
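
In practice, locating the unique data means diffing your own dump against a stock Firmware image of the same version (A sketch using a standard tool; the file names are placeholders, and which offsets matter is of course Motherboard specific):

    # Print every byte offset where the backup dump differs from the stock image
    cmp -l backup_dump.bin stock_image.bin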


My preferred solution to deal with this would be that a Motherboard unit gets assigned default unique values at factory time, with both physical stickers on the PCB and the values preprogrammed in the Firmware, which is mostly how it seems to be handled right now. However, they should be readable only by the Firmware itself at POST time, so that it can display them to the user. The Firmware should provide builtin configuration tools that allow the user to either pass these values as-is, or spoof them in any way that the user sees fit. That is what user control means, right? In case that you're recovering and have to start everything from scratch, the Firmware should include documentation that says which values are unique to a specific unit and where they are located, so as to reduce the time spent doing image comparisons just to figure out what is considered unique data when trying to reconstruct a Firmware image almost from scratch.



4.10 - Avoiding bloatware embedded in the Firmware via the WPBT ACPI Table

Microsoft defined an ACPI Table known as WPBT (Windows Platform Binary Table). This table points at executable code embedded into the Firmware itself, which Windows 8 and later automatically executes on every boot. The idea behind the WPBT is that it can be helpful for things like antitheft Software, since erasing the disk drive and doing a clean Windows installation wouldn't get rid of it (Wannabe thieves, remember to install Linux instead!), or to embed the Motherboard Device Drivers, so that you can get a fully functional Windows installation even when completely offline, without the need to copy the files to a USB Flash Drive or slipstream them into the Windows install media.

As always, there is a problem somewhere. In this case, the issue is that loading the WPBT ACPI Table can't be disabled Windows side. As such, it is extremely prone to being misused or abused, serving as a major attack vector for malware should the Firmware ever become compromised and, almost as bad, for unavoidable OEM bloatware. Lenovo has used it for its LSE (Lenovo Service Engine) Software, and ASUS for the Armoury Crate in its recent ROG Motherboard series (At least the ASUS one can be disabled Firmware side, so that it doesn't create the WPBT ACPI Table that Windows reads. Lenovo was forced by user backlash to provide a Firmware update that added an option for disabling it).
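
If you're curious whether your current system ships a WPBT at all, on Linux the Firmware provided ACPI Tables are exposed through sysfs, so the check is trivial (The file is simply absent on Motherboards that don't use the feature; run as root):

    # The WPBT, if present, shows up alongside the other ACPI Tables
    ls /sys/firmware/acpi/tables/WPBT

    # Dump it to inspect the embedded binary that Windows would execute
    cat /sys/firmware/acpi/tables/WPBT > wpbt.bin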


With an open source Firmware, you have the absolute power to decide whether this highly exploitable feature gets used or not. I actually think that it could be an extremely useful complement to unattended Windows installs, assuming it installs what YOU want it to.



4.11 - Better backwards compatibility with legacy OSes

Believe it or not, there are people that still want to run Windows XP bare metal on latest generation systems. One of the problems with this approach is not Device Driver support (Expect almost everything to not work!), but ACPI. The ACPI interface provided by current Motherboard Firmwares is simply too new, thus Windows XP BSODs at boot due to ACPI incompatibilities, making it completely unusable. The current solution is to hack the WXP acpi.sys so that it can boot.

While I consider running WXP bare metal ridiculous because you can achieve better results with virtualization (When I got into IOMMU virtualization and PCI Passthrough, it was precisely to keep running ancient WXP on newer systems. Instead of running it bare metal, with all the implications that has from a Driver perspective, using a Hypervisor/VMM acting as a Hardware abstraction layer makes things easier, because you can rely on emulated/paravirtualized Drivers, limiting the need for native Driver support to just the passed through Video Card, as the host OS handles the rest. The same general idea has been tested with decent results for retro computing, with Windows 9x and a passed through PCI Voodoo), an open source Firmware is a perfect fit for this use case: Instead of having to hack a proprietary OS' innards so that it copes with newer ACPI versions, you can use a payload that provides a compatible ACPI interface, so that it works bare metal out-of-the-box.



4.12 - Embed whatever payload you want in the Firmware itself

Coreboot is not limited to just the previously mentioned standard payloads like the Linux Kernel, SeaBIOS and TianoCore. Since the Flash EEPROM is merely memory, you can literally put whatever you want in there (After the bare minimum required to initialize the platform, obviously), and use it in any way that you like. This is perhaps the most powerful yet underrated feature that Coreboot has. Here are some interesting projects that showcase what sort of payloads you could embed...
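
For reference, adding content to an already built Coreboot image is done with the cbfstool utility (A minimal sketch with placeholder file names; many more file types and options exist):

    # Add an ELF binary as a payload to the CBFS inside the image
    cbfstool coreboot.rom add-payload -f mypayload.elf -n img/mypayload

    # Add an arbitrary raw file, then print the resulting CBFS layout
    cbfstool coreboot.rom add -f whatever.bin -n whatever -t raw
    cbfstool coreboot.rom print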


A recent project was to embed a bootable Windows 3.1 floppy image within the Firmware image. This was done using Coreboot with SeaBIOS as a payload, as SeaBIOS has a Floppy Disk emulator that can boot from such an image (Besides, Windows 3.1 relies on BIOS Services, so having SeaBIOS was unavoidable anyways). I find that this project has novelty value only, as FreeDOS, which should be far more useful, is a commonly supported payload. This isn't precisely new, either, since as far back as the late 80's there were MS-DOS and other DOS compatible versions intended to be loaded from ROM.
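
As far as I understand, the mechanics behind it are simple: SeaBIOS scans the CBFS for files under the floppyimg/ prefix and exposes each one as a bootable virtual Floppy Drive, so embedding the image boils down to something like this (The file name is a placeholder):

    # Place the floppy image in CBFS where SeaBIOS will look for it
    cbfstool coreboot.rom add -f win31.img -n floppyimg/win31.img -t raw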


A much more interesting project, already more than a decade old, is Coreboot AVATT (All Virtual All The Time), which used as a Coreboot payload an optimized Linux Kernel with KVM for virtualization and a small shell as a user interface. It should have been able to launch VMs with no additional help, but due to almost all links being dead, I didn't manage to see it in action (I'm still puzzled about how it relied on standalone KVM, as it is typically paired with QEMU).

AVATT was extremely interesting back in the day, but nowadays its purpose can be achieved by other means. Historically, Servers that are used purely for virtualization run a thin bare metal Hypervisor like VMware ESXi instead of a full blown OS installation, so they didn't require a lot of disk space to boot something that makes such a system functional for its intended purpose. Actually, the Hypervisor could be entirely RAM resident, never touching the disk drive again after booting, assuming that logkeeping is disabled or that logs are saved remotely. However, you still needed a disk drive to put the Hypervisor in and boot from. Back then it was common to have a small HD/SSD as a Boot Drive, which wasted a Port and Chassis real estate that would be better used if the drive was in a RAID array (RAID works at the drive level, so even if the Hypervisor is small, you're losing an entire drive if you have to partition it just to get some room to install the Hypervisor). While embedding things in the Firmware is a possible solution, the modern way to deal with this problem is simply to use a SD Card or USB Flash Drive as the Boot Drive, which is less wasteful than a full blown HD/SSD and doesn't require big Flash EEPROMs, either. Some Motherboards even have internal SD Card Slots or USB Ports for this purpose. Remote booting via PXE is also viable. This may not be as novel as embedding a Linux Kernel with KVM, VFIO and QEMU in the same chip as the Firmware itself, but it gets the job done anyways, thus while the AVATT concept is attractive, the solution is not unique anymore.

Even then, there are still people interested in a Firmware-based Hypervisor, as recently shown in a presentation during the 2019 Open Source Firmware Conference about embedding the Bareflank Hypervisor.


Embedding small stress test utilities like MemTest86 in the Firmware could also be quite convenient. In the particular case of MemTest86, it is already distributed as an EFI application, thus it can be directly loaded by TianoCore. Intel also uses an EFI application for its recently released auto overclocking tool known as Performance Maximizer, thus the Processor stress test is entirely performed from an independent pre-OS environment (I suppose that the application installs some files into the ESP of a GPT formatted disk drive, since the contents of that partition are viewable and directly loadable by a UEFI Firmware). Even that could theoretically be embedded, assuming that Intel somehow gets rid of the other 16 GiB worth of baggage.
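
Since TianoCore as a payload provides a standard UEFI environment, launching an ESP resident EFI application manually is trivial from the UEFI Shell (A sketch; the file name and its location in the ESP are placeholders):

    Shell> fs0:
    FS0:\> memtest86.efi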



5 - CONCLUSION


I hope that all this makes a strong case for why open source Firmwares could be a killer feature for the Hardware enthusiast crowd, as most of the noise about this matter comes from privacy paranoid advocates that simply ignore the practical use cases that an open source Firmware may have. As much as BIOS modding already does to bypass restrictions, fix broken things or add some features, being able to work with clean source code instead of binary modules and reverse engineering tools will put far more power into modders' hands than what they currently have. What I expect is that this can educate the consumer and generate enough demand to push more people to pressure vendors into making open source Firmwares easier to implement, if not an out-of-the-box feature, as System76 and Purism are already doing for Notebooks.


Happy 2020!