Check the documents in the references; they have more reasoning behind each rule than the one-line summary.
1) Simple control flow is obviously right. A lot of every programmer's everyday refactoring is just simplifying control flow.
Recursion is a bit niche for embedded, since you usually have limited stack space; proving from the call graph that your stack can never overflow is a handy analysis, and recursion tends to fuck it up.
On desktop PCs you can get away with it, but always make it obvious that it terminates.
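A minimal sketch of what "obviously terminating" recursion can look like: an explicit depth bound on a linked-list walk, so both termination and worst-case stack usage are easy to see. The `node` type and `MAX_DEPTH` here are made up for illustration.

```c
#include <stddef.h>

typedef struct node { int value; struct node *next; } node;

#define MAX_DEPTH 64  /* assumed upper bound on list length */

/* Sums the list into *out. Returns -1 if the depth bound is
 * exceeded (oversized list, or a cycle), 0 on success. The bound
 * also caps stack usage at MAX_DEPTH frames. */
int bounded_sum(const node *n, int depth, int *out)
{
    if (n == NULL) { *out = 0; return 0; }
    if (depth >= MAX_DEPTH) return -1;
    int rest;
    if (bounded_sum(n->next, depth + 1, &rest) != 0) return -1;
    *out = n->value + rest;
    return 0;
}
```

The `depth` parameter makes the termination argument local to the function instead of depending on a global proof about the data.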
2) Again, in embedded programming you'll often have some loop where you write a command into a device register and wait in a loop for the flag that says it finished.
Naive code would just loop forever. But really you should be anal and have all these loops bounded by a timeout or retry count.
It never fails in dev, but then your sensor dies in the field and your whole board gets stuck because of that? No thanks.
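As a sketch, a bounded busy-wait over a status register might look like this; `MAX_POLLS` and the register layout are assumptions, not from any particular device:

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_POLLS 1000u  /* assumed retry budget; tune per device */

/* Poll until any bit in `mask` comes up in the status register,
 * but give up after MAX_POLLS iterations instead of hanging. */
bool wait_for_flag(volatile const uint32_t *status_reg, uint32_t mask)
{
    for (uint32_t i = 0; i < MAX_POLLS; i++) {
        if (*status_reg & mask)
            return true;
        /* a platform-specific micro-delay would typically go here */
    }
    return false; /* timed out: let the caller report the fault */
}
```

The caller checks the return value and can reset the peripheral or flag the fault, instead of wedging the whole board on one dead sensor.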
3) No dynamic allocation is overkill for many people. But you can approach it like this: you need to reliably handle both the cost of allocation and the failure of allocation (resource exhaustion and such).
Even for performance-sensitive software it's mostly a handy rule. Allocate all resources ahead of time, fail early, and the rest of the run is predictable.
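One common shape of "allocate everything up front" is a fixed block pool. This is a toy sketch (block count and size are arbitrary): all memory is reserved statically, allocation cost is a bounded scan, and exhaustion is an explicit NULL the caller must handle.

```c
#include <stdbool.h>
#include <stddef.h>

#define POOL_BLOCKS 32
#define BLOCK_SIZE  64

static unsigned char pool[POOL_BLOCKS][BLOCK_SIZE];
static bool in_use[POOL_BLOCKS];

/* O(POOL_BLOCKS) worst case, no syscalls, no fragmentation. */
void *pool_alloc(void)
{
    for (size_t i = 0; i < POOL_BLOCKS; i++) {
        if (!in_use[i]) {
            in_use[i] = true;
            return pool[i];
        }
    }
    return NULL; /* exhausted: caller decides how to degrade */
}

void pool_free(void *p)
{
    for (size_t i = 0; i < POOL_BLOCKS; i++) {
        if (p == pool[i]) { in_use[i] = false; return; }
    }
}
```

Since the pool size is fixed at build time, "will we run out of memory?" becomes a question you can answer before shipping rather than one the field answers for you.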
4) is bullshit; check Carmack on inlined code, and I do believe he's right. Long functions can actually make things cleaner and simpler to navigate and understand.
5) I'm not settled on asserting heavily. It's nice to have invariants explicitly stated, but when does it become too much?
>Assertions must be side-effect free.
God I wish C compilers would check this.
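The classic trap: with `-DNDEBUG`, the whole `assert()` expression expands to nothing, taking any side effect inside it along. A hypothetical `pop()` shows the failure mode and the safe pattern:

```c
#include <assert.h>

static int count = 0;

/* hypothetical operation with a side effect */
static int pop(void) { return ++count; }

int demo(void)
{
    /* BAD: assert(pop() > 0);
     * pop() would silently never run in a release (NDEBUG) build. */

    /* GOOD: do the work unconditionally, assert on the result. */
    int v = pop();
    assert(v > 0);
    return v;
}
```

Debug and release builds then behave identically except for the check itself, which is the whole point of the rule.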
>9) ... function pointers are not permitted
RIP dynamic dispatch and interfaces. Obviously I disagree with this.
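For reference, this is the sort of pattern the rule forbids: a tiny "interface" in C built from a function pointer, the usual way drivers and callbacks get swapped out. All the names here are hypothetical.

```c
/* A minimal vtable-style interface: any backend that provides a
 * read function can sit behind the same sensor_if. */
typedef struct {
    int (*read)(void *ctx); /* dynamic dispatch goes through here */
    void *ctx;
} sensor_if;

/* one possible backend for testing */
static int fake_read(void *ctx) { return *(int *)ctx; }

int read_sensor(const sensor_if *s) { return s->read(s->ctx); }
```

Banning this pushes you toward switch statements or build-time selection, which static analyzers love but which makes mocking and pluggable drivers a lot clumsier.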