My favourite one of this kind is the Rockchip RK808 RTC, whose engineers thought November had 31 days. It needs a Linux kernel patch to this day that translates between the Gregorian and Rockchip calendars (which gradually diverge over time).
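For the curious, the kernel's workaround amounts to counting phantom days. A minimal Go sketch of the idea (the 2016 base year and the function names here are illustrative assumptions, not the driver's actual code):

    package main

    import (
        "fmt"
        "time"
    )

    // The chip inserts a phantom 31st day into every November, so its
    // calendar falls one day behind Gregorian for each November that has
    // elapsed. Counting November->December transitions since a base year
    // and adding that many days undoes the skew.
    func nov2decTransitions(t time.Time) int {
        n := t.Year() - 2016 // base year is an assumption for illustration
        if t.Month() > time.November {
            n++
        }
        return n
    }

    func rockchipToGregorian(t time.Time) time.Time {
        return t.AddDate(0, 0, nov2decTransitions(t))
    }

    func main() {
        chip := time.Date(2024, time.December, 25, 12, 0, 0, 0, time.UTC)
        fmt.Println(rockchipToGregorian(chip)) // nine phantom days past the chip's idea of the date
    }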
It's always November, isn't it? I once built a log collection system that had a map of month names to months (I had to create it because the Go date package didn't support that particular month-name abbreviation). As you might've guessed, it lacked November, but no one noticed for 4+ months, and I've since left the company. It spawned a local meme, #nolognovember, and even went public (this was in Russia: https://pikabu.ru/story/no_log_november_10441606)
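Something like this, presumably; a hypothetical Go reconstruction (the real abbreviations were whatever that log format used):

    package main

    import (
        "fmt"
        "time"
    )

    // Eleven entries look an awful lot like twelve at a glance.
    var monthAbbrev = map[string]time.Month{
        "jan": time.January, "feb": time.February, "mar": time.March,
        "apr": time.April, "may": time.May, "jun": time.June,
        "jul": time.July, "aug": time.August, "sep": time.September,
        "oct": time.October, "dec": time.December, // "nov" is nowhere to be found
    }

    func main() {
        _, ok := monthAbbrev["nov"]
        fmt.Println("november parses:", ok) // false, for 4+ months in production
    }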
That hardware real-time clocks keep time as a date and time drives me batty. And no one does the right thing, which is just a 64-bit counter counting 32 kHz ticks; then use canned, tested code to convert that to butt-scratching monkey time.
Story: my old boss designed an STD Bus RTC card in 1978 or so. It kept time as YY:MM:DD HH:MM:SS plus 1/60 sec, and was battery-backed, with shadow registers that latched the time. A couple of years later he redesigned it as a 32-bit seconds counter with a 32 kHz sub-seconds counter, plus a 48-bit offset register. What had been a whole card became a couple of 4000-series ICs on the processor card. He wrote 400 bytes of Z80 assembly to convert that to date and time. He said it was tricky to get right, but once done it was done.
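The software side of that scheme is a few lines in a modern language. A minimal Go sketch of the counter-plus-offset layout from the story (the register widths follow the story; the Unix epoch choice and the readRegisters() accessor are assumptions for illustration):

    package main

    import (
        "fmt"
        "time"
    )

    const ticksPerSecond = 32768 // 32.768 kHz crystal

    // Stand-in for latching and reading the hardware shadow registers.
    func readRegisters() (seconds uint32, subTicks uint16, offset uint64) {
        return 1735689600, 0x4000, 0 // dummy values for illustration
    }

    func main() {
        secs, sub, off := readRegisters()
        // The 48-bit offset register holds seconds to add on top of the
        // free-running counter, so the counter itself never needs setting.
        total := int64(secs) + int64(off&0xFFFFFFFFFFFF)
        nanos := int64(sub) * int64(time.Second) / ticksPerSecond
        // The epoch is purely a software decision; Unix epoch assumed here.
        fmt.Println(time.Unix(total, nanos).UTC().Format(time.RFC3339Nano))
    }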
> the characters ‘n’ and ‘o’ differ by only one bit; an unpredictable error that sets that bit could change GenuineIntel to GenuineIotel.
On a QWERTY keyboard, the O key is also next to the I key. It's also possible someone accidentally fat-fingered "GenuineIontel", noticed something was off, moved their cursor between the "o" and the "n", and then hit Delete instead of Backspace.
Maybe an unlikely set of circumstances, but I imagine a random hardware-level bit flip is rare here, since it would probably have caused other, more visible problems if it had hit something more important.
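The one-bit claim does check out, for what it's worth; a quick Go check:

    package main

    import "fmt"

    func main() {
        fmt.Printf("%08b\n", 'n') // 01101110 (0x6E)
        fmt.Printf("%08b\n", 'o') // 01101111 (0x6F)
        fmt.Println('n' ^ 'o')    // 1: they differ only in the lowest bit
    }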
I like this theory. I can totally imagine some big spreadsheet of processor model names where, every time the production line changes CPU model, someone copy/pastes the model name into some janky firmware-programming utility running on an off-the-shelf mini PC on the manufacturing floor, implemented as a "temporary fix" 5 years ago.
The GenuineIotel thing fascinates me because I can't fully grasp how it could happen. I can imagine a physical defect causing a permanently wrong bit in a specific piece of silicon, but it seems more widespread than that. Perhaps some kind of bug in the logic synthesis process?
I am reminded of the old AMD CPUs with "unlockable" extra cores, which, when unlocked, would change the model name to something unusual.
"GenuineIotel" is definitely odd, but difficult to research more about; I suspect these CPUs might actually end up being collector's items sometime in the future.
> […] because inserting no-op instructions after them prevents the issue.
Some of the 386 bugs described there sound to me like the classic kind of "multiple different subsystems interact in the wrong way" issue that can slip through the testing process and get into hardware, like this one:
> For example, there was one bug that manifested itself in incorrect instruction decoding if a conditional branch instruction had just the right sequence of taken/not-taken history, and the branch instruction was followed immediately by a selector load, and one of the first two instructions at the destination of the branch was itself a jump, call, or return.
Even if you write up a comprehensive test plan for the branch predictor, and for selector loads, and so on, it might easily not include that particular corner case. And pre-silicon testing is expensive and slow, which also limits how much of it you can do.
This sort of bug, especially in and around pipelines, is always hard to find. On the chips I've built, we had one guy who made a system that would generate random instruction streams to try to trigger as many of them as we possibly could.
Yeah, I think random-instruction-sequence testing is a pretty good approach to try to find the problems you didn't think of up front. I wrote a very simple tool for this years ago to help flush out bugs in QEMU: https://gitlab.com/pm215/risu
Though the bugs we were looking to catch there were definitely not the multiple-interacting-subsystems type, and more just the "corner cases in input data values in floating point instructions" variety.
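The generation side can be almost trivially simple. A toy Go sketch of the idea (the instruction pool and output format are made up; real tools like risu or RISCV-DV do much more, e.g. running the stream on the device under test and on a reference model and diffing the register state afterwards):

    package main

    import (
        "fmt"
        "math/rand"
    )

    // A tiny pool of R-type RISC-V ops. A real generator covers the whole
    // ISA and deliberately weights sequences that share pipeline resources.
    var ops = []string{"add", "sub", "mul", "div", "and", "or", "xor"}

    func main() {
        r := rand.New(rand.NewSource(42)) // fixed seed, so a failing stream reproduces
        for i := 0; i < 12; i++ {
            fmt.Printf("%s x%d, x%d, x%d\n",
                ops[r.Intn(len(ops))], 1+r.Intn(15), 1+r.Intn(15), 1+r.Intn(15))
        }
    }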
> To me, this issue doesn’t seem as embarrassing as Intel’s wrong CPUIDs. Pipelined CPUs are hard to build
I disagree. Misspelling a name in the CPUID is kind of easy to do, somewhat awkward to test (in a non-tautological way), and pretty easy to work around.
Having `mul ...; lw ...;` fail shows that they've done very little testing of the chip. Any basic randomised pipeline testing would hit that trivial case.
Essentially all CPUs are pipelined today. In-order pipelined CPU execution semantics are not particularly hard to test. Even some open source testing systems could detect this bug, e.g. TestRig or RISCV-DV.
Also one of my favourite kernel patch messages: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...