Proculus Technologies is the leading TFT LCD display manufacturer in the embedded-device industry, focusing on all-in-one TFT LCDs including UART and Android solutions. As a custom LCD and screen display manufacturer, Proculus can provide custom LCD displays, screens, and panels according to your requirements. We are now focused on the worldwide market and eager to provide great products and services to customers all over the world. Proculus offers a complete and ever-improving LCD display solution for intelligent displays that makes GUI development simple, cost-effective, and fast. Contact Proculus before you purchase your LCD products.
Compatible with and directly pluggable into all versions of Raspberry Pi motherboards (the Raspberry Pi 1 Model B and the Zero need an additional HDMI cable)
After the LCD driver installation completes, the system will restart automatically. If the LCD displays and responds to touch normally, the driver has been installed successfully.
C. The retropie-rpi1_zero system cannot be logged into via SSH (the board has no network port or Wi-Fi module), so you need to copy the driver over the serial port. For details, see the Raspberry Pi Zero serial-port instructions.
After execution, the driver will be installed, the system will restart automatically, and the display will be rotated 90 degrees with display and touch working normally.
("XXX-show" can be replaced with the driver script for your panel, and "90" can be changed to 0, 90, 180, or 270, representing rotation angles of 0, 90, 180, and 270 degrees.)
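As a hedged illustration of what that install-and-rotate step typically looks like with the common vendor "LCD-show" driver scripts (the repository and script name here are examples; use the script that matches your panel):

```
# Example only: clone the vendor's driver-script repository and run the
# script for your panel with the desired rotation angle (0, 90, 180, or 270).
git clone https://github.com/goodtft/LCD-show.git
cd LCD-show
chmod +x LCD35-show      # "LCD35-show" is a placeholder for your panel's script
sudo ./LCD35-show 90     # installs the driver, rotates 90 degrees, and reboots
```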
The RPi LCD can be driven in two ways: Method 1, install the driver on your Raspbian OS; Method 2, use the ready-to-use image file in which the LCD driver is pre-installed.
3) Insert the TF card into the Raspberry Pi and start it up. The LCD will display after booting. Then log in to the Raspberry Pi terminal (you may need to connect a keyboard and an HDMI LCD to the Pi to install the driver, or log in remotely over SSH).
1. Executing apt-get upgrade can cause the LCD to stop working properly. In this case, edit the config.txt file on the SD card and delete this line: dtoverlay=ads7846.
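A minimal sketch of that fix, assuming the boot partition is mounted at /boot on the running Pi:

```
# Comment out the overlay line that breaks the LCD after apt-get upgrade,
# then reboot for the change to take effect.
sudo sed -i 's/^dtoverlay=ads7846/#dtoverlay=ads7846/' /boot/config.txt
sudo reboot
```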
This LCD can be calibrated with the xinput-calibrator program. Note: the Raspberry Pi must be connected to the network, or the program won't install successfully.
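For reference, installing and running the calibrator usually looks like this (a sketch; the generated snippet is typically saved under /etc/X11/xorg.conf.d/ to make the calibration persistent):

```
# Install the calibrator (needs network access) and run it against the
# current X display; follow the on-screen crosshairs and save the output.
sudo apt-get update
sudo apt-get install -y xinput-calibrator
DISPLAY=:0 xinput_calibrator
```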
This LCD supports Raspberry Pi OS / Ubuntu / Kali / Retropie systems. When the LCD is used with systems such as Raspberry Pi OS, the resolution must be set manually; otherwise the display will be abnormal.
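A hedged example of setting the resolution manually in config.txt for an 800x480 panel (the timing values shown are an example; use the values from your panel's documentation):

```
# Append a custom HDMI mode to the boot partition's config.txt.
cat << 'EOF' | sudo tee -a /boot/config.txt
hdmi_group=2
hdmi_mode=87
hdmi_cvt=800 480 60 6 0 0 0
EOF
sudo reboot
```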
8) Connect the HDMI interface of the LCD to the HDMI interface of the Raspberry Pi, power on the Raspberry Pi, and wait for a few seconds until the LCD displays normally.
If you use the Buster branch of the system, you can use it with the configuration above. If you are using the Bullseye branch, however, you need to change the default KMS driver to the FKMS driver so that the system desktop displays correctly.
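A sketch of that change on a Bullseye image (back up config.txt first; the overlay names below are the standard Raspberry Pi ones):

```
# Switch the display stack from full KMS to FKMS in the boot partition's
# config.txt, then reboot.
sudo sed -i 's/^dtoverlay=vc4-kms-v3d/dtoverlay=vc4-fkms-v3d/' /boot/config.txt
sudo reboot
```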
Note that if you need to use a CSI camera under the Bullseye branch, this branch uses the libcamera library by default, and that library does not support the FKMS driver.
KING TECH has been a TFT LCD IPS supplier and solution specialist since 2003. We are a group company that includes an Innolux-authorized LCD panel and IC distribution company.
We provide different kinds of custom TFT display services. If needed, we can make custom-size TFT displays for customers: we have a good relationship with the original TFT display module factories and can negotiate with them to tool up an LCD panel mask. The tooling cost is very high and is paid by the end customer, and the MOQ is at least 25K/lot.
We are capable of changing every part of the TFT display module's structure: increasing backlight brightness to make it sunlight readable (the highest brightness we have reached is 6500 cd/m²), and changing the display FPC shape and length. To customize a resistive touch panel (RTP) or capacitive touch panel (CTP/PCAP), we have long-term suppliers that work with us on such tooling; for CTP we can also make different shapes and thicknesses of cover glass, single-touch and multi-touch, and AG/AR/AF treatments are also available.
With our own PCBA hardware and software design company, we can design different kinds of TFT display modules for our customers: from simple converter boards to complete motherboards, from HDMI driver boards to Android controller boards, and from non-touch boards to capacitive-touch boards. All of this is part of our work.
We also have our own TFT display panel and driver distribution department. If you want to switch to a different display structure, we can help, because we know which panel and driver IC match best, which combination has the most stable supply, and which one we can source at the lowest price.
To give customers the best support, Kingtech, as one of the best TFT LCD IPS suppliers in China, can also provide industrial solutions such as developing a motherboard, serial-port UART board, T-CON board, HDMI board, or monitor according to the customer's requirements.
Kingtech also has existing industrial solutions such as the PV135 motherboard, PV901 Linux board, and PV804 motherboard. They can be connected between a Raspberry Pi and our TFT display modules to make them work together.
For serial-port UART boards, Kingtech has a 2.8-inch 240x320 LCD, a 3.5-inch 320x480 module, a 4.3-inch 480x272 display with resistive touch, and a 7-inch 800x480/1024x600 TFT with capacitive touch, each paired with a serial-port UART board.
For existing monitor products, Kingtech has an 8-inch 1280x800 IPS monitor, a 10.1-inch 1280x800 monitor, a 15.6-inch 1280x800 LCD monitor, a 12.3-inch 1920x720 IPS 850-nit LCD monitor, and an 18.5-inch 1366x768 1000-nit LCD monitor.
For HDMI boards, Kingtech has a 1.39-inch 454x454 round AMOLED, a 3.34-inch 320x320 round TFT, a 3.4-inch 800x800 round TFT, a 5-inch 1080x1080 TFT, a 4.3-inch 800x480 TFT, a 5-inch 800x480 LCD, a 7-inch 800x480/1024x600 LCD display, and a 10.1-inch 1280x800 LCD module, each with an HDMI board.
All of the above TFT display modules with boards can be used in industrial equipment, medical devices, smart homes, and other applications. Kingtech can also provide custom industrial TFT display solutions according to the customer's requirements. You are welcome to contact us; if you are interested in any TFT display module products, we can offer you a reasonable TFT LCD display price. Thank you.
TFT stands for Thin-Film Transistor, and AMOLED stands for Active-Matrix Organic Light-Emitting Diode. A TFT display module is a liquid crystal panel lit by a backlight, while an AMOLED panel emits light on its own. A TFT module's structure is thicker and stronger; AMOLED is very thin and more fragile. TFT modules are more widely used than AMOLED, which appears mostly in consumer products such as smartwatches, mobile phones, and TVs.
IPS stands for In-Plane Switching. It is also known as a free-viewing-angle technology, meaning the viewing angle is the same on all four sides, whereas a normal display has a best viewing direction such as 6 o'clock or 12 o'clock. TFT display modules include both normal-viewing-angle panels and IPS panels; an IPS display is a kind of TFT display module.
The TFT display module belongs to the LCD family. LCD stands for Liquid Crystal Display and includes mono (single-color) LCDs and color LCDs; single-color LCDs are rarely used now, and color LCDs come in two types, STN and TFT. Therefore, a TFT display module is a kind of LCD display.
OLED stands for Organic Light-Emitting Diode. An OLED display emits light on its own and does not need a separate backlight, so it has lower power consumption than a TFT display module, but its lifetime is shorter than TFT's (around 5,000 hours). AMOLED is a kind of OLED with richer color. A TFT display module requires a backlight, and its power consumption is higher than OLED's, but its lifetime is much longer (around 20,000 hours).
An LED display works by lighting up individual LEDs, while a TFT display module is lit by a backlight and the liquid crystal layer modulates that light to show content. A TFT display module has brighter, truer color and a lower price, while an LED display has lower power consumption, less heat, and a longer lifetime.
Compared to other types of display, the TFT display module is the most widely used. It can be made in different shapes and sizes, from very small to large, resolutions keep getting higher, and the price of custom TFT display modules keeps getting more competitive. Its lifetime is longer than an OLED display's, and its color is brighter than OLED's.
A number of people have used a Motorola Atrix Lapdock to add a screen and keyboard with trackpad to RasPi, in essence building a RasPi-based laptop computer. Lapdock is a very clever idea: you plug your Atrix smart phone into Lapdock and it gives you an 11.6" 1366 x 768 HDMI monitor with speakers, a keyboard with trackpad, two USB ports, and a large enough battery for roughly 5 hours of use. The smart phone acts as a motherboard with "good enough" performance. The advantage over a separate laptop or desktop computer is that you have one computing device so you don't need to transfer files between your phone and your desk/laptop.
Unfortunately for Motorola, Lapdock was not successful (probably because of its US$500 list price) and Motorola discontinued it and sold remaining stock at deep discounts, with many units selling for US$50-100. This makes it a very attractive way to add a modest size HDMI screen to RasPi, with a keyboard/trackpad and rechargeable battery power thrown in for free.
Lapdock has two connectors that plug into an Atrix phone: a Micro HDMI D plug for carrying video and sound, and a Micro USB plug for charging the phone and connecting to the Lapdock's internal USB hub, which talks to the Lapdock keyboard, trackpad, and two USB ports. With suitable cables and adapters, these two plugs can be connected to RasPi's full-size HDMI connector and one of RasPi's full-size USB A ports.
The hardest part about connecting Lapdock is getting the cables and adapters. Most HDMI and USB cables are designed to plug into jacks, whereas the Lapdock has plugs so the cables/adapters must have Micro HDMI and Micro USB female connections. These are unusual cables and adapters, so check the links.
Lapdock uses the HDMI plug to tell if a phone is plugged in by seeing if the HDMI DDC/CEC ground pin is pulled low. If it's not, Lapdock is powered off. As soon as you plug in a phone or RasPi, all the grounds short together and Lapdock powers itself on. However, it only does this if the HDMI cable actually connects the DDC/CEC ground line. Many cheap HDMI cables do not include the individual ground lines, and rely on a foil shield connected to the outer shells on both ends. Such a cable will not work with an unmodified Lapdock. There is a detailed blog entry on the subject at element14: Raspberry Pi Lapdock HDMI cable work-around. The blog describes a side-benefit of this feature: you can add a small power switch to Lapdock so you can leave RasPi attached all the time without draining the battery.
When you do not connect an HDMI monitor, the GPU in the Pi will simply rescale (http://en.wikipedia.org/wiki/Image_scaling) anything that would have appeared on the HDMI screen to a resolution suitable for the TV standard chosen (PAL or NTSC) and output it as a composite video signal.
The Broadcom BCM2835 only provides HDMI output and composite output. The RGB and other signals needed by RGB, S-Video, or VGA connectors are not provided, and the R-Pi also isn't designed to power an unpowered converter box.
Note that any conversion hardware that converts HDMI/DVI-D signals to VGA (or DVI-A) signals comes with either an external PSU or the expectation that power can be drawn from the HDMI port. In the latter case the device may initially appear to work, but there will be a problem: the HDMI spec only provides for a maximum of 50 mA (at 5 V) from the HDMI port, yet all of these adapters try to draw much more, up to 500 mA. On the R-Pi there is a limit of 200 mA that can be drawn safely, as 200 mA is the limit for the BAT54 diode (D1) on the board. Any HDMI-to-VGA adapter without an external PSU might work for a while but then burn out D1, so do not use HDMI converters powered by the HDMI port!
Alternatively, it may be possible to design an expansion board that plugs into the LCD headers on the R.Pi. Here is something similar for Beagleboard:
AdvaBoard RPi1: Raspberry Pi multifunction extension board, incl. an interface and software for 3.2"/5"/7" 16-bit parallel TFT-displays incl. touchscreen with up to 50 frames/s (3.2", 320x240)
Texy's 2.8" TFT + Touch Shield Board: HY28A-LCDB display with 320 x 240 resolution @ 10 ~ 20fps, 65536 colors, assembled and tested £24 plus postage, mounts on GPIO pins nicely matching Pi board size, or via ribbon cable
After I published my $1 MCU write-up, several readers suggested I look at application processors — the MMU-endowed chips necessary to run real operating systems like Linux. Massive shifts over the last few years have seen internet-connected devices become more featureful (and hopefully, more secure), and I’m finding myself putting Linux into more and more places.
This article is targeted at embedded engineers who are familiar with microcontrollers but not with microprocessors or Linux, so I wanted to put together something with a quick primer on why you’d want to run embedded Linux, a broad overview of what’s involved in designing around application processors, and then a dive into some specific parts you should check out — and others you should avoid — for entry-level embedded Linux systems.
If my mantra for the microcontroller article was that you should pick the right part for the job and not be afraid to learn new software ecosystems, my argument for this post is even simpler: once you’re booted into Linux on basically any of these parts, they become identical development environments.
That makes chips running embedded Linux almost a commodity product: as long as your processor checks off the right boxes, your application code won’t know if it’s running on an ST or a Microchip part — even if one of those is a brand-new dual-core Cortex-A7 and the other is an old ARM9. Your I2C drivers, your GPIO calls — even your V4L-based image processing code — will all work seamlessly.
As a result, the boards I built for this review are akin to the notes from your high school history class or a recording you made of yourself practicing a piece of music to study later. So while I’ll post pictures of the boards and screenshots of layouts to illustrate specific points, these aren’t intended to serve as reference designs or anything; the whole point of the review is to get you to a spot where you’ll want to go off and design your own little Linux boards. Teach a person to fish, you know?
Coming from microcontrollers, the first thing you’ll notice is that Linux doesn’t usually run on Cortex-M, 8051, AVR, or other popular microcontroller architectures. Instead, we use application processors — popular ones are the Arm Cortex-A, ARM926EJ-S, and several MIPS iterations.
The biggest difference between these application processors and a microcontroller is quite simple: microprocessors have a memory management unit (MMU), and microcontrollers don’t. Yes, you can run Linux without an MMU, but you usually shouldn’t: Cortex-M7 parts that can barely hit 500 MHz routinely go for double or quadruple the price of faster Cortex-A7s. They’re power-hungry: microcontrollers are built on larger processes than application processors to reduce their leakage current. And without an MMU and generally-low clock speeds, they’re downright slow.
When your microcontroller project outgrows its super loop and the random ISRs you’ve sprinkled throughout your code with care, there are many bare-metal tasking kernels to turn to — FreeRTOS, ThreadX (now Azure RTOS), RT-Thread, μC/OS, etc. By an academic definition, these are operating systems. However, compared to Linux, it’s more useful to think of these as a framework you use to write your bare-metal application inside. They provide the core components of an operating system: threads (and obviously a scheduler), semaphores, message-passing, and events. Some of these also have networking, filesystems, and other libraries.
Comparing bare-metal RTOSs to Linux simply comes down to the fundamental difference between these and Linux: memory management and protection. This one technical difference makes Linux running on an application processor behave quite differently from your microcontroller running an RTOS.((Before the RTOS snobs attack with pitchforks, yes, there are large-scale, well-tested RTOSes that are usually run on application processors with memory management units. Look at RTEMS as an example. They don’t have some of the limitations discussed below, and have many advantages over Linux for safety-critical real-time applications.))
Because Linux-capable application processors have a memory management unit, *alloc() calls execute swiftly and reliably. Physical memory is only reserved (faulted in) when you actually access a memory location. Memory fragmentation is much less an issue since Linux frees and reorganizes pages behind the scenes. Plus, switching to Linux provides easier-to-use diagnostic tools (like valgrind) to catch bugs in your application code in the first place. And finally, because applications run in virtual memory, if your app does have memory bugs in it, Linux will kill it — leaving the rest of your system running. ((As a last-ditch kludge, it’s not uncommon to call your app in a superloop shell script to automatically restart it if it crashes without having to restart the entire system.))
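A minimal sketch of the "superloop" restart script mentioned in that footnote; the application path and log file are placeholders:

```
#!/bin/sh
# Relaunch the application whenever it exits (crash or otherwise),
# logging each restart so you can see how often it happens.
while true; do
    /usr/bin/my-app                                            # placeholder path
    echo "$(date): my-app exited with status $?" >> /var/log/my-app-restarts.log
    sleep 1                                                    # avoid a tight respawn loop
done
```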
Running something like lwIP under FreeRTOS on a bare-metal microcontroller is acceptable for a lot of simple applications, but application-level network services like HTTP can be a burden to implement reliably. Stuff that seems simple to a desktop programmer — like a WebSockets server that can accept multiple simultaneous connections — can be tricky to implement in bare-metal network stacks. Because C doesn’t have good programming constructs for asynchronous calls or exceptions, the code tends to contain either a lot of weird state machines or tons of nested branches, and it’s horrible to debug the problems that occur. In Linux, you get a first-class network stack, plus tons of rock-solid userspace libraries that sit on top of that stack and provide application-level network connectivity. Plus, you can use a variety of high-level programming languages that handle the asynchronous nature of networking more gracefully.
Somewhat related is the rest of the standards-based communication / interface frameworks built into the kernel. I2S, parallel camera interfaces, RGB LCDs, SDIO, and basically all those other scary high-bandwidth interfaces seem to come together much faster when you’re in Linux. But the big one is USB host capabilities. On Linux, USB devices just work. If your touchscreen drivers are glitching out and you have a client demo to show off in a half-hour, just plug in a USB mouse until you can fix it (I’ve been there before). Product requirements change and now you need audio? Grab a $20 USB dongle until you can respin the board with a proper audio codec. On many boards without Ethernet, I just use a USB-to-Ethernet adapter to allow remote file transfer and GDB debugging. Don’t forget that, at the end of the day, an embedded Linux system is shockingly similar to your computer.
While secure boot isn’t available on every application processor reviewed here, it’s much more common than on microcontrollers. And while there are still vulnerabilities that get disclosed from time to time, my non-expert opinion is that the implementations seem much more robust than on Cortex-M parts: boot configuration data and keys are stored in one-time-programmable memory that is not accessible from non-privileged code. Network security is also more mature and easier to implement using the Linux network stack and cryptography support, and OP-TEE provides a ready-to-roll secure environment for many parts reviewed here.
Imagine that you needed to persist some configuration data across reboot cycles. Sure, you can use structs and low-level flash programming code, but if this data needs to be appended to or changed in an arbitrary fashion, your code would start to get ridiculous. That’s why filesystems (and databases) exist. Yes, there are embedded libraries for filesystems, but these are way clunkier and more fragile than the capabilities you can get in Linux with nothing other than ticking a box in menuconfig. And databases? I’m not sure I’ve ever seen an honest attempt to run one on a microcontroller, while there’s a limitless number available on Linux.
In a bare-metal environment, you are limited to a single application image. As you build out the application, you’ll notice things get kind of clunky if your system has to do a few totally different things simultaneously. If you’re developing for Linux, you can break this functionality into separate processes, which you can develop, debug, and deploy as separate binary images.
Bare-metal MCU development is primarily done in C and C++. Yes, there are interesting projects to run Python, Javascript, C#/.NET, and other languages on bare metal, but they’re usually focused on implementing the core language only; they don’t provide a runtime that is the same as a PC. And even their language implementation is often incompatible. That means your code (and the libraries you use) have to be written specifically for these micro-implementations. As a result, just because you can run MicroPython on an ESP32 doesn’t mean you can drop Flask on it and build up a web application server. By switching to embedded Linux, you can use the same programming languages and software libraries you’d use on your PC.
In Linux, there is a hard separation between userspace calls and the underlying hardware driver code. One key advantage of this is how easy it is to move from one hardware platform to another; it’s not uncommon to only have to change a couple of lines of code to specify the new device names when porting your code.
Yes, you can poke GPIO pins, perform I2C transactions, and fire off SPI messages from userspace in Linux, and there are some good reasons to use these tools during diagnosing and debugging. Plus, if you’re implementing a custom I2C peripheral device on a microcontroller, and there’s very little configuration to be done, it may seem silly to write a kernel driver whose only job is to expose a character device that basically passes on whatever data directly to the I2C device you’ve built.
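This is the sort of userspace poking being described, using the standard i2c-tools and libgpiod command-line utilities; the bus number, device address, register, and GPIO line below are placeholders for your own hardware:

```
# Scan I2C bus 0 and read one register from a device at address 0x68
# (classic i2c-tools usage; values are placeholders).
i2cdetect -y 0
i2cget -y 0 0x68 0x75
# Drive a GPIO line high from userspace with libgpiod's v1 CLI syntax.
gpioset gpiochip0 17=1
```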
But if you’re interfacing with off-the-shelf displays, accelerometers, IMUs, light sensors, pressure sensors, temperature sensors, ADCs, DACs, and basically anything else you’d toss on an I2C or SPI bus, Linux already has built-in support for this hardware that you can flip on when building your kernel and configure in your DTS file.
Sleep-mode power consumption. First, the good news: active mode power consumption of application processors is quite good when compared to microcontrollers. These parts tend to be built on smaller process nodes, so you get more megahertz for your ampere than the larger processes used for Cortex-M devices. Unfortunately, embedded Linux devices have a battery life that’s measured in hours or days, not months or years.
Boot time. Embedded Linux systems can take several seconds to boot up, which is orders of magnitude longer than a microcontroller’s start-up time. Alright, to be fair, this is a bit of an apples-to-oranges comparison: if you were to start initializing tons of external peripherals, mount a filesystem, and initialize a large application in an RTOS on a microcontroller, it could take several seconds to boot up as well. While boot time is a culmination of tons of different components that can all be tweaked and tuned, the fundamental limit is caused by application processors’ inability to execute code from external flash memory; they must copy it into RAM first ((unless you’re running an XIP kernel)).
Responsiveness. By default, Linux’s scheduler and resource system are full of unbounded latencies that under weird and improbable scenarios may take a long time to resolve (or may actually never resolve). Have you ever seen your mouse lock up for 3 seconds randomly? There you go. If you’re building a ventilator with Linux, think carefully about that. To combat this, there’s been a PREEMPT_RT patch for some time that turns Linux into a real-time operating system with a scheduler that can basically preempt anything to make sure a hard-real-time task gets a chance to run.
Also, when many people think they need a hard-real-time kernel, they really just want their code to be low-jitter. Coming from Microcontrollerland, it feels like a 1000 MHz processor should be able to bit-bang something like a 50 kHz square wave consistently, but you would be wrong. The Linux scheduler is going to give you something on the order of ±10 µs of jitter for interrupts, not the ±10 ns jitter you’re used to on microcontrollers. This can be remedied too, though: while Linux gobbles up all the normal ARM interrupt vectors, it doesn’t touch FIQ, so you can write custom FIQ handlers that execute completely outside of kernel space.
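If you want to put numbers on that jitter for your own board, the usual tool is cyclictest from the rt-tests package; a typical invocation (a sketch) looks like this:

```
# Measure scheduling latency: one measurement thread at the highest real-time
# priority, waking every 1000 microseconds, for one minute.
sudo cyclictest --mlockall --priority=99 --interval=1000 --duration=1m
```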
Figuring out system requirements for your software frameworks can be rather unintuitive. For example, doing a multi-touch-capable finger-painting app in Qt 5 is actually much less of a resource hog than running a simple backend server for a web app written in a modern stack using a JIT-compiled language. Many developers familiar with traditional Linux server/desktop development assume they’ll just throw a .NET Core web app on their rootfs and call it a day — only to discover that they’ve completely run out of RAM, or their app takes more than five minutes to launch, or they discover that Node.js can’t even be compiled for the ARM9 processor they’ve been designing around.
Slower ARM9 cores are for simple headless gadgets written in C/C++. Yes, you can run basic, animation-free low-resolution touch linuxfb apps with these, but blending and other advanced 2D graphics technology can really bog things down. And yes, you can run very simple Python scripts, but in my testing, even a “Hello, World!” Flask app took 38 seconds from launch to actually spitting out a web page to my browser on a 300 MHz ARM9. Yes, obviously once the Python file was compiled, it was much faster, but you should primarily be serving up static content using lightweight HTTP servers whenever possible. And, no, you can’t even compile Node.js or .NET Core for these architectures. These also tend to boot from small-capacity SPI flash chips, which limits your framework choices.
I know that there are lots of people — especially hobbyists but even professional engineers — who have gotten to this point in the article and are thinking, “I do all my embedded Linux development with Raspberry Pi boards — why do I need to read this?” Yes, Raspberry Pi single-board computers, on the surface, look similar to some of these parts: they run Linux, you can attach displays to them, do networking, and they have USB, GPIO, I2C, and SPI signals available.
And for what it’s worth, the BCM2711 mounted on the Pi 4 is a beast of a processor and would easily best any part in this review on that measure. Dig a bit deeper, though: this processor has video decoding and graphics acceleration, but not even a single ADC input. It has built-in HDMI transmitters that can drive dual 4k displays, but just two PWM channels. This is a processor that was custom-made, from the ground up, to go into smart TVs and set-top boxes — it’s not a general-purpose embedded Linux application processor, so it isn’t generally suited for embedded Linux work.
It might be the perfect processor for your particular project, but it probably isn’t; forcing yourself to use a Pi early in the design process will over-constrain things. Yes, there are always workarounds to the aforementioned shortcomings — like I2C-interfaced PWM chips, SPI-interfaced ADCs, or LCD modules with HDMI receivers — but they involve external hardware that adds power, bulk, and cost. If you’re building a quantity-of-one project and you don’t care about these things, then maybe the Pi is the right choice for the job, but if you’re prototyping a real product that’s going to go into production someday, you’ll want to look at the entire landscape before deciding what’s best.
This article is all about getting an embedded application processor booting Linux — not building an entire embedded system. If you’re considering running Linux in an embedded design, you likely have some combination of Bluetooth, WiFi, Ethernet, TFT touch screen, audio, camera, or low-power RF transceiver work going on.
If you’re coming from the MCU world, you’ll have a lot of catching up to do in these areas, since the interfaces (and even architectural strategies) are quite different. For example, while single-chip WiFi/BT MCUs are common, very few application processors have integrated WiFi/BT, so you’ll typically use external SDIO- or USB-interfaced chipsets. Your SPI-interfaced ILI9341 TFTs will often be replaced with parallel RGB or MIPI models. And instead of burping out tones with your MCU’s 12-bit DAC, you’ll be wiring up I2S audio CODECs to your processor.
Processor vendors vigorously encourage reference design modification and reuse for customer designs. I think most professional engineers are more concerned with getting Rev A hardware that boots than with playing around with optimization, so many custom Linux boards I see are spitting images of off-the-shelf EVKs.
Most MPUs can boot from SPI NOR flash, SPI NAND flash, parallel, or MMC (for use with eMMC or MicroSD cards). Because of its organization, NOR flash memory has better read speeds but worse write speeds than NAND flash. SPI NOR flash memory is widely used for tiny systems with up to 16 MB of storage, but above that, SPI NAND and parallel-interfaced NOR and NAND flash become cheaper. Parallel-interfaced NOR flash used to be the ubiquitous boot media for embedded Linux devices, but I don’t see it deployed as much anymore — even though it can be found at sometimes half the price of SPI flash. My only explanation for its unpopularity is that no one likes wasting lots of I/O pins on parallel memory.
Unlike MCU-based designs, on an embedded Linux system, you absolutely, positively, must have a console UART available. Linux’s entire tracing architecture is built around logging messages to a console, as is the U-Boot bootloader.
That doesn’t mean you shouldn’t also have JTAG/SWD access, especially in the early stage of development when you’re bringing up your bootloader (otherwise you’ll be stuck with printf() calls). Having said that, if you actually have to break out your J-Link on your embedded Linux board, it probably means you’re having a really bad day. While you can attach a debugger to an MPU, getting everything set up correctly is extremely clunky when compared to debugging an MCU. Prepare to relocate symbol tables as your code transitions from SRAM to main DRAM memory. It’s not uncommon to have to muck around with other registers, too (like forcing your CPU out of Thumb mode). And on top of that, I’ve found that some U-Boot ports remux the JTAG pins (either due to alternate functionality or to save power), and the JTAG chains on some parts are quite complex and require using less-commonly used pins and features of the interface. Oh, and since you have an underlying Boot ROM that executes first, JTAG adapters can screw that up, too.
When most people think of DDR routing, length-tuning is the first thing that comes to mind. If you use a decent PCB design package, setting up length-tuning rules and laying down meandered routes is so trivial to do that most designers don’t think anything of it — they just go ahead and length-match everything that’s relatively high-speed — SDRAM, SDIO, parallel CSI / LCD, etc. Other than adding a bit of design time, there’s no reason not to maximize your timing margins, so this makes sense.
For the data groups, DDR3 uses on-die termination (ODT), configurable for 40, 60, or 120 ohm on memory chips (and usually the same or similar on the CPU) along with adjustable output impedance drivers. ODT is only enabled on the receiver’s end, so depending on whether you’re writing data or reading data, ODT will either be enabled on the memory chip, or on the CPU.
When building embedded Linux systems, we need to start by compiling all the off-the-shelf software we plan on running — the bootloader, kernel, and userspace libraries and applications. We’ll have to write and customize shell scripts and configuration files, and we’ll also often write applications from scratch. It’s really a totally different development process, so let’s talk about some prerequisites.
If you want to build a software image for a Linux system, you’ll need a Linux system. If you’re also the person designing the hardware, this is a bit of a catch-22 since most PCB designers work in Windows. While Windows Subsystem for Linux will run all the software you need to build an image for your board, WSL currently has no ability to pass through USB devices, so you won’t be able to use hardware debuggers (or even a USB microSD card reader) from within your Linux system. And since WSL2 is Hyper-V-based, once it’s enabled, you won’t be able to launch VMware, which uses its own hypervisor((Though a beta version of VMware will address this)).
Consequently, I recommend users skip over all the newfangled tech until it matures a bit more, and instead just spin up an old-school VMWare virtual machine and install Linux on it. In VMWare you can pass through your MicroSD card reader, debug probe, and even the device itself (which usually has a USB bootloader).
Building images is a computationally heavy and highly-parallel workload, so it benefits from large, high-wattage HEDT/server-grade multicore CPUs in your computer — make sure to pass as many cores through to your VM as possible. Compiling all the software for your target will also eat through storage quickly: I would allocate an absolute minimum of 200 GB if you anticipate juggling between a few large embedded Linux projects simultaneously.
While your specific project will likely call for much more software than this, these are the five components that go into every modern embedded Linux system((Yes, there are alternatives to these components, but the further you move away from the embedded Linux canon, the more you’ll find yourself on your own island, scratching your head trying to get things to work.)):
A cross toolchain, usually GCC + glibc, which contains your compiler, binutils, and C library. This doesn’t actually go into your embedded Linux system, but rather is used to build the other components.
As you’re reading through this, don’t get overwhelmed: if your hardware is reasonably close to an existing reference design or evaluation kit, someone has already gone to the trouble of creating default configurations for you for all of these components, and you can simply find and modify them. As an embedded Linux developer doing BSP work, you’ll spend way more time reading other people’s code and modifying it than you will be writing new software from scratch.
Just like with microcontroller development, when working on embedded Linux projects, you’ll write and compile the software on your computer, then remotely test it on your target. When programming microcontrollers, you’d probably just use your vendor’s IDE, which comes with a cross toolchain — a toolchain designed to build software for one CPU architecture on a system running a different architecture. As an example, when programming an ATTiny1616, you’d use a version of GCC built to run on your x64 computer but designed to emit AVR code. With embedded Linux development, you’ll need a cross toolchain here, too (unless you’re one of the rare types coding on an ARM-based laptop or building an x64-powered embedded system).
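To make that concrete, cross-compiling a trivial program for a 32-bit ARM target looks something like this; the toolchain triplet is an example and depends on your architecture and C library:

```
# Build hello.c with the cross compiler, then confirm the output is an ARM
# binary rather than one for the build machine.
arm-linux-gnueabihf-gcc -o hello hello.c
file hello     # should report something like "ELF 32-bit LSB executable, ARM"
```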
Unfortunately, our CPU’s boot ROM can’t directly load our kernel. Linux has to be invoked in a specific way to obtain boot arguments and a pointer to the device tree and initrd, and it also expects that main memory has already been initialized. Boot ROMs also don’t know how to initialize main memory, so we would have nowhere to store Linux. Also, boot ROMs tend to just load a few KB from flash at the most — not enough to house an entire kernel. So, we need a small program that the boot ROM can load that will initialize our main memory and then load the entire (usually-multi-megabyte) Linux kernel and then execute it.
U-Boot has to know a lot of technical details about your system. There’s a dedicated board.c port for each supported platform that initializes clocks, DRAM, and relevant memory peripherals, along with initializing any important peripherals, like your UART console or a PMIC that might need to be configured properly before bringing the CPU up to full speed. Newer board ports often store at least some of this configuration information inside a Device Tree, which we’ll talk about later. Some of the DRAM configuration data is often autodetected, allowing you to change DRAM size and layout without altering the U-Boot port’s code for your processor ((If you have a DRAM layout on the margins of working, or you’re using a memory chip with very different timings than the one the port was built for, you may have to tune these values)). You configure what you want U-Boot to do by writing a script that tells it which device to initialize, which file/address to load into which memory address, and what boot arguments to pass along to Linux. While these can be hard-coded, you’ll often store these names and addresses as environmental variables (the boot script itself can be stored as a bootcmd environmental variable). So a large part of getting U-Boot working on a new board is working out the environment.
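As a sketch of what such an environment might look like for a board that boots a kernel and DTB from a FAT partition on an SD card (load addresses, file names, and the console device are placeholders; a real port defines its own):

```
# Typed at the U-Boot prompt (or baked into the port's default environment).
setenv bootargs 'console=ttyS0,115200 root=/dev/mmcblk0p2 rootwait'
setenv bootcmd 'fatload mmc 0:1 ${kernel_addr_r} zImage; fatload mmc 0:1 ${fdt_addr_r} myboard.dtb; bootz ${kernel_addr_r} - ${fdt_addr_r}'
saveenv
```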
Here’s the headline act. Once U-Boot turns over the program counter to Linux, the kernel initializes itself, loads its own set of device drivers((Linux does not call into U-Boot drivers the way that an old PC operating system like DOS makes calls into BIOS functions.)) and other kernel modules, and calls your init program.
To get your board working, the necessary kernel hacking will usually be limited to enabling filesystems, network features, and device drivers — but there are more advanced options to control and tune the underlying functionality of the kernel.
Turning drivers on and off is easy, but actually configuring these drivers is where new developers get hung up. One big difference between embedded Linux and desktop Linux is that embedded Linux systems have to manually pass the hardware configuration information to Linux through a Device Tree file or platform data C code, since we don’t have EFI or ACPI or any of that desktop stuff that lets Linux auto-discover our hardware.
We need to tell Linux the addresses and configurations for all of our CPU’s fancy on-chip peripherals, and which kernel modules to load for each of them. You may think that’s part of the Linux port for our CPU, but in Linux’s eyes, even peripherals that are literally inside our processor — like LCD controllers, SPI interfaces, or ADCs — have nothing to do with the CPU, so they’re handled totally separately as device drivers stored in separate kernel modules.
And then there’s all the off-chip peripherals on our PCB. Sensors, displays, and basically all other non-USB devices need to be manually instantiated and configured. This is how we tell Linux that there’s an MPU6050 IMU attached to I2C0 with an address of 0x68, or an OV5640 image sensor attached to a MIPI D-PHY. Many device drivers have additional configuration information, like a prescaler factor, update rate, or interrupt pin use.
The old way of doing this was manually adding C structs to a platform_data C file for the board, but the modern way is with a Device Tree, which is a configuration file that describes every piece of hardware on the board in a weird quasi-C/JSONish syntax. Each logical piece of hardware is represented as a node that is nested under its parent bus/device; its node is adorned with any configuration parameters needed by the driver.
A DTS file is not compiled into the kernel, but rather, into a separate .dtb binary blob file that you have to deal with (save to your flash memory, configure u-boot to load, etc)((OK, I lied. You can actually append the DTB to the kernel so U-Boot doesn’t need to know about it. I see this done a lot with simple systems that boot from raw Flash devices.)). I think beginners have a reason to be frustrated at this system, since there are basically two separate places you have to think about device drivers: Kconfig and your DTS file, and if these get out of sync, it can be frustrating to diagnose, since you won’t get a compilation error if your device tree contains nodes that there are no drivers for, or if your kernel is built with a driver that isn’t actually referenced in the DTS file, or if you misspell a property or something (since all bindings are resolved at runtime).
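For reference, the DTB is just another build artifact; with a kernel tree and cross toolchain it is produced roughly like this (the file name is a placeholder, and note that kernel DTS files usually need the kernel's own build system because they pull in preprocessor includes):

```
# Build all device tree blobs for a 32-bit ARM kernel tree; output lands in
# arch/arm/boot/dts/.
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- dtbs
# A simple standalone DTS (one without #includes) can also be compiled directly:
dtc -I dts -O dtb -o myboard.dtb myboard.dts
```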
Once Linux has finished initializing, it runs init. This is the first userspace program invoked on start-up. Our init program will likely want to run some shell scripts, so it’d be nice to have a sh we can invoke. Those scripts might touch or echo or cat things. It looks like we’re going to need to put a lot of userspace software on our root filesystem just to get things to boot — now imagine we want to actually login (getty), list a directory (ls), configure a network (ifconfig), or edit a text file (vi, emacs, nano, vim, flamewars ensue).
BusyBox configuration is obvious and uses the same Kconfig-based system that Linux and U-Boot use. You simply tell it which packages (and options) you wish to build the binary image with. There’s not much else to say — though a minor “gotcha” for new users is that the lightweight versions of these tools often have fewer features and don’t always support the same syntax/arguments.
Linux requires a root filesystem; it needs to know where the root filesystem is and what filesystem format it uses, and this parameter is part of its boot arguments.
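Those parameters travel on the kernel command line; a typical set for an SD-card system looks like the sketch below (device names are examples):

```
# Example boot arguments (usually set via U-Boot's bootargs variable):
#   root=/dev/mmcblk0p2 rootfstype=ext4 rootwait console=ttyS0,115200
# On a running target you can inspect what the kernel actually received:
cat /proc/cmdline
```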
If the preceding section made you dizzy, don’t worry: there’s really no reason to hand-configure and hand-compile all of that stuff individually. Instead, everyone uses build systems — the two big ones being Yocto and Buildroot — to automatically fetch and compile a full toolchain, U-Boot, Linux kernel, BusyBox, plus thousands of other packages you may want, and install everything into a target filesystem ready to deploy to your hardware.
Yes, on their own, both U-Boot and Linux have defconfigs that do the heavy lifting: For example, by using a U-Boot defconfig, someone has already done the work for you in configuring U-Boot to initialize a specific boot media and boot off it (including setting up the SPL code, activating the appropriate peripherals, and writing a reasonable U-Boot environment and boot script).
But the build system default configurations go a step further and integrate all these pieces together. For example, assume you want your system to boot off a MicroSD card, with U-Boot written directly at the beginning of the card, followed by a FAT32 partition containing your kernel and device tree, and an ext4 root filesystem partition. U-Boot’s defconfig will spit out the appropriate bin file to write to the SD card, and Linux’s defconfig will spit out the appropriate vmlinuz file, but it’s the build system itself that will create a MicroSD image, write U-Boot to it, create the partition scheme, format the filesystems, and copy the appropriate files to them. Out will pop an “image.sdcard” file that you can write to a MicroSD card.
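Once the build system has produced that image, writing it to a card is a one-liner; /dev/sdX below is a placeholder, so triple-check the device name before running it:

```
# Buildroot drops its artifacts under output/images/; write the SD-card image
# to the MicroSD card and flush it before removal.
sudo dd if=output/images/sdcard.img of=/dev/sdX bs=1M conv=fsync status=progress
```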
Buildroot started as a bunch of Makefiles strung together to test uClibc against a pile of different commonly-used applications to help squash bugs in the library. Today, the infrastructure is the same, but it’s evolved to be the easiest way to build embedded Linux images.
By using the same Kconfig system used in Linux, U-Boot, and BusyBox, you configure everything — the target architecture, the toolchain, Linux, U-Boot, target packages, and overall system configuration — by simply running make menuconfig. It ships with tons of canned defconfigs that let you get a working image for your dev board by loading that config and running make. For example, make raspberrypi3_defconfig && make will spit out an SD card image you can use to boot your Pi off of.
Buildroot can also pass you off to the respective Kconfigs for Linux, U-Boot, or BusyBox — for example, running make linux-menuconfig will invoke the Linux menuconfig editor from within the Buildroot directory. I think beginners will struggle to know what is a Buildroot option and what is a Linux kernel or U-Boot option, so be sure to check in different places.
And, honestly, for run-and-gun projects, you probably won’t even bother creating an official board or defconfig — you’ll just hack at the existing ones. We can do this because Buildroot is crafty in lots of good ways designed to make it easy to make stuff work. For starters, most of the relevant settings are part of the defconfig file that can easily be modified and saved — for very simple projects, you won’t have to make further modifications. Think about toggling on a device driver: in Buildroot, you can invoke Linux’s menuconfig, modify things, save that config back to disk, and update your Buildroot config file to use your local Linux config, rather than the one in the source tree. Buildroot knows how to pass out-of-tree DTS files to the compiler, so you can create a fresh DTS file for your board without even having to put it in your kernel source tree or create a machine or anything. And if you do need to modify the kernel source, you can hardwire the build process to bypass the specified kernel and use an on-disk one (which is great when doing active development).
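A sketch of that workflow in Buildroot terms (the option names are Buildroot's; exact paths depend on your project layout):

```
# Tweak the kernel configuration from inside Buildroot, then persist it back
# to the file that BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE points at.
make linux-menuconfig
make linux-update-defconfig
# Point Buildroot at an out-of-tree DTS via BR2_LINUX_KERNEL_CUSTOM_DTS_PATH,
# or build against a local kernel tree with an override in local.mk:
#   LINUX_OVERRIDE_SRCDIR = /path/to/your/kernel
make savedefconfig        # save the Buildroot configuration itself
```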
The chink in the armor is that Buildroot is brain-dead at incremental builds. For example, if you load your defconfig, make, and then add a package, you can probably just run make again and everything will work. But if you change a package option, running make won’t automatically pick that up, and if there are other packages that need to be rebuilt as a result of that upstream dependency, Buildroot won’t rebuild those either. You can use the make [package]-rebuild target, but you have to understand the dependency graph connecting your different packages. Half the time, you’ll probably just give up and do make clean && make ((Just remember to save your Linux, U-Boot, and BusyBox configuration modifications first, since they’ll get wiped out.)) and end up rebuilding everything from scratch, which, even with the compiler cache enabled, takes forever. Honestly, Buildroot is the principal reason that I upgraded to a Threadripper 3970X during this project.
There are many layers in the official Yocto repos. Layers can be licensed and distributed separately, so many companies maintain their own “Yocto layers” (e.g., meta-atmel), and the big players actually maintain their own distribution that they build with Yocto. TI’s ProcessorSDK is built using their Arago Project infrastructure, which is built on top of Yocto. The same goes for ST’s OpenSTLinux Distribution. Even though Yocto distributors make heavy use of Google’s repo tool, getting a set of all the layers necessary to build an image can be tedious, and it’s not uncommon for me to run into strange bugs that occur when different vendors’ layers collide.
But here’s where Yocto falls flat for me as a hardware person: it has absolutely no interest in helping you build images for the shiny new custom board you just made. It is not a tool for quickly hacking together a kernel/U-Boot/rootfs during the early stages of prototyping (say, during this entire blog project). It wasn’t designed for that, so architectural decisions they made ensure it will never be that. It’s written in a very software-engineery way that values encapsulation, abstraction, and generality above all else. It’s not hard-coded to know anything, so you have to modify tons of recipes and create clunky file overlays whenever you want to do even the simplest stuff. It doesn’t know what DTS files are, so it doesn’t have a “quick trick” to compile Linux with a custom one. Even seemingly mundane things — like using menuconfig to modify your kernel’s config file and save that back somewhere so it doesn’t get wiped out — become ridiculous tasks. Just read through Section 1 of this Yocto guide to see what it takes to accomplish the equivalent of Buildroot’s make linux-savedefconfig((Alright, to be fair: many kernel recipes are set up with a hardcoded defconfig file inside the recipe folder itself, so you can often just manually copy over that file with a generated defconfig file from your kernel build directory — but this relies on your kernel recipe being set up this way)). Instead, if I plan on having to modify kernel configurations or DTS files, I usually resort to the nuclear option: copy the entire kernel somewhere else and then set the kernel recipe’s SRC_URI to that.
It may not seem like a big distinction when you’re getting started, but Yocto builds a Linux distribution, while Buildroot builds a system image. Yocto knows what each software component is and how those components depend on each other. As a result, Yocto can build a package feed for your platform, allowing you to remotely install and update software on your embedded product just as you would a desktop or server Linux instance. That’s why Yocto thinks of itself not as a Linux distribution, but as a tool to build Linux distributions. Whether you use that feature or not is a complicated decision — I think most embedded Linux engineers prefer to do whole-image updates at once to ensure there’s no chance of something screwy going on. But if you’re building a huge project with a 500 MB root filesystem, pushing images like that down the tube can eat through a lot of bandwidth (and annoy customers with “Downloading….” progress bars).
Allwinner F1C200s: a 400 MHz ARM9 SIP with 64 MB (or 32 MB for the F1C100s) of DDR SDRAM, packaged in an 88-pin QFN. Suitable for basic HMI applications with a parallel LCD interface, built-in audio codec, USB port, one SDIO interface, and little else.
Nuvoton NUC980: 300 MHz ARM9 SIP available in a variety of QFP packages and memory configurations. No RGB LCD controller, but has an oddly large number of USB ports and controls-friendly peripherals.
Rockchip RK3308: A quad-core 1.3 GHz Cortex-A35 that’s a much newer design than any of the other parts reviewed. Tailor-made for smart speakers, this part has enough peripherals to cover general embedded Linux work while being one of the easiest Rockchip parts to design around.
The Microchip, NXP, ST, and TI parts are what I would consider general-purpose MPUs: designed to drop into a wide variety of industrial and consumer connectivity, control, and graphical applications. They have 10/100 ethernet MACs (obviously requiring external PHYs to use), a parallel RGB LCD interface, a parallel camera sensor interface, two SDIO interfaces (typically one used for storage and the other for WiFi), and up to a dozen each of UARTs, SPI, I2C, and I2S interfaces. They often have extensive timers and a dozen or so ADC channels. These parts are also packaged in large BGAs that ball-out 100 or more I/O pins that enable you to build larger, more complicated systems.
On the other hand, the Allwinner and Rockchip parts are much more purpose-built for consumer goods — usually very specific consumer goods. With a built-in Ethernet PHY and a parallel and MIPI camera interface, the V3s is obviously designed as an IP camera. The F1C100s — a part with no Ethernet but with a hardware video decoder — is built for low-cost video playback applications. The A33 — with LVDS / MIPI display support, GPU acceleration, and no Ethernet — is for entry-level Android tablets. None of these parts have more than a couple UART, I2C, or SPI interfaces, and you might get a single ADC input and PWM channel on them, with no real timer resources available. But they all have built-in audio codecs — a feature not found anywhere else — along with hardware video decoding (and, in some cases, encoding). Unfortunately, with Allwinner, you always have to put a big asterisk by these hardware peripherals, since many of them will only work when using the old kernel that Allwinner distributes — along with proprietary media encoding/decoding libraries. Mainline Linux support will be discussed more for each part separately.
But, being Nuvoton, this chip has some (mostly good) weirdness up its sleeve. Unlike the other mainstream parts that were packaged in ~270 ball BGAs, the NUC980 comes in 216-pin, 128-pin, and even 64-pin QFP packages. I’ve never had issues hand-placing 0.8mm pitch BGAs, but there’s definitely a delight that comes from running Linux on something that looks like it could be a little Cortex-M microcontroller.
Another weird feature of this chip is that in addition to the 2 USB high-speed ports, there are 6 additional “host lite” ports that run at full speed (12 Mbps). Nuvoton says they’re designed to be used with cables shorter than 1m. My guess is that these are basically full-speed USB controllers that just use normal GPIO cells instead of fancy-schmancy analog-domain drivers with controlled output impedance, slew rate control, true differential inputs, and all that stuff.
Honestly, the only peripheral omission of note is the lack of a parallel RGB LCD controller. Nuvoton is clearly signaling that this part is designed for IoT gateway and industrial networked applications, not HMI. That’s unfortunate since a 300-MHz ARM9 is plenty capable of running basic GUIs. The biggest hurdle would be finding a place to stash a large GUI framework inside the limited SPI flash these devices usually boot from.
The NUC980 BSP seems to be built and documented for people who don’t know anything about embedded Linux development. The NUC980 Linux BSP User Manual assumes your main system is a Windows PC, and politely walks you through installing the “free” VMWare Player, creating a CentOS-based virtual machine, and configuring it with the missing packages necessary for cross-compilation.
Interestingly, the original version of NuWriter — the tool you’ll use to flash your image to your SPI flash chip using the USB bootloader of the chip — is a Windows application. They have a newer command-line utility that runs under Linux, but this should illustrate where these folks are coming from.
They have a custom version of Buildroot, but they also have an interesting BSP installer that will get you a prebuilt kernel, u-boot, and rootfs you can start using immediately if you’re just interested in writing applications. Nuvoton also includes small application examples for CAN, ALSA, SPI, I2C, UART, camera, and external memory bus, so if you’re new to embedded Linux, you won’t have to run all over the Internet as much, searching for spidev demo code, for example.
For seasoned Linux developers, things get a bit weird when you start pulling back the covers. Instead of using a Device Tree, they actually use old-school platform configuration data by default (though they provide a device tree file, and it’s relatively straightforward to configure Linux to just append the DTB blob to the kernel so you don’t have to rework all your bootloader stuff).
Here’s a big problem Nuvoton needs to fix: by default, Nuvoton’s BSP is set up to boot from an SPI flash chip with a simple initrd filesystem appended to the uImage that’s loaded into RAM. This is a sensible configuration for a production application, but it’s definitely a premature optimization that makes development challenging — any modifications you make to files will be wiped away on reboot (there’s nothing more exciting than watching sshd generate a new keypair on a 300 MHz ARM9 every time you reboot your board). Furthermore, I discovered that if the rootfs started getting “too big” Linux would fail to boot altogether.
Instead, the default configuration should store the rootfs on a proper flash filesystem (like YAFFS2), mounted read-write. Nuvoton doesn’t provide a separate Buildroot defconfig for this, and for beginners (heck, even for me), it’s challenging to switch the system over to this boot strategy, since it involves changing literally everything — the rootfs image that Buildroot generates, the USB flash tool’s configuration file, U-Boot’s bootcmd, and Linux’s Kconfig.
Instead, go straight to the source — when I had problems, I just filed issues on the GitHub repos for the respective tools I used (Linux, U-Boot, BuildRoot, NUC980 Flasher). Nuvoton engineer Yi-An Chen and I kind of had a thing for a while where I’d post an issue, go to bed, and when I’d wake up, he had fixed it and pushed his changes back into master. Finally, the time difference between the U.S. and China comes in handy!
These parts are built for low-cost AV playback and feature a 24-bit LCD interface (which can also be multiplexed to form an 18-bit LCD / 8-bit camera interface), built-in audio codec, and analog composite video in/out. There’s an H.264 video decoder that you’ll need to be able to use this chip for video playback. Just like with the A33, the F1C100s has some amazing multimedia hardware that’s bogged down by software issues with Allwinner — the company isn’t set up for typical Yocto/Buildroot-based open-source development. The parallel LCD interface and audio codec are the only two of these peripherals that have mainline Linux support; everything else only currently works with the proprietary Melis operating system Allwinner distributes, possibly an ancient 3.4-series kernel they have kicking around, along with their proprietary CedarX software (though there is an open-source effort that’s making good progress, and will likely end up supporting the F1C100s and F1C200s).
There may or may not be official dev boards from Allwinner, but most people use the $7.90 Lichee Pi Nano as a reference design. This is set up to boot from SPI NOR flash and directly attach to a TFT via the standard 40-pin FPC pinouts used by low-cost parallel RGB LCDs.
Once you do get everything set up, you’ll end up with a bog-standard mainline Linux kernel with typical Device Tree support. I set up my Buildroot tree to generate a YAFFS2 filesystem targeting an SPI NOR flash chip.
My issue is that none of these signals are particularly high-speed so there’s no reason to run them over proprietary connectors. Sure, it’s a hassle to breadboard something like a 24-bit RGB LCD bus, but it’s way better than having to design custom adapter boards to convert the 0.5mm-pitch FPC connection to whatever your actual display uses.
These classic dev board designs are aptly named “evaluation kits” instead of “development platforms.” They end up serving more as a demonstration that lets you prototype an idea for a product — but when it comes time to actually design the hardware, you have to make so many component swaps that your custom board is no longer compatible with the DTS / drivers you used on the evaluation kit. I’m really not a fan of these (that’s one of the main reasons I designed a bunch of breakout boards for all these chips).
Microchip selectively depopulated the chip's ball grid in such a way that you can escape almost all I/O signals on the top layer. There are also large voids in the interior area, which give ample room for capacitor placement without worrying about bumping into vias. I had a student begging me to let him lay out a BGA-based embedded Linux board, and this processor provided a gentle introduction.
The victim of this haphazard pin-muxing is the LCD and CSI interfaces, which have overlapping pins. And Microchip didn’t even do it in a crafty way like the F1C100s where you could still run an LCD (albeit in 16-bit mode) with an 8-bit camera sensor attached.
This is a new part that hasn’t made its way into the main Buildroot branch yet, but I grabbed the defconfig and board folder from this Buildroot-AT91 branch. They’re using the linux4sam 4.4 kernel, but there’s also mainline Linux support for the processor, too.
The EVK DTS has pre-configured pinmux schemes for RGB565, RGB666, and RGB888 parallel LCD interfaces, so you can easily switch over to whichever you’re using. The default timings were reasonable; I didn’t have to do any configuration to interface the chip with a standard 5″ 800×480 TFT. I threw Qt 5 plus all the demos on an SD card, plugged in a USB mouse to the third USB port, and I was off to the races. Qt Quick / QML is perfectly usable on this platform, though you’re going to run into performance issues if you start plotting a lot of signals. I also