"Framebuffer" as a term itself is not a problematic thing, in general. The original Atlus 68000 hardware used in DDP has a framebuffer for the sprites. However, his entire system uses a framebuffer, including the backgrounds, and not just the sprites. As designed, based on his paper, it sounds to me like the backgrounds will be delayed one frame compared to original hardware. The impact is minimal, but it means the background will appear to be shifted vertically a little bit, and things like the disappearance of the stage 3 sky ships will happen one frame out of sync. This can be remedied by adding an additional frame of latency to the sprite layer.
The original DDP hardware's sprites are effectively delayed by two frames: the sprite description list is double-buffered, and the sprite framebuffer is double-buffered. In contrast, the background layout data is not double-buffered; it is simply updated during vblank. Then, during active display, pixel data is generated on the fly through repeated fetches from background VRAM and background character memory. The Atlus hardware Cave used is so simple that one IC handles one layer. More backgrounds were added by simply adding more background ICs, each with totally separate memory buses. That's two buses per background layer! (Even though the VRAM is switched onto the 68000 CPU bus during CPU access, it is mastered independently by the background IC during active display.)
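To make the two-frame figure concrete, here's a minimal Verilog sketch of the ping-pong buffering, assuming both banks swap once per vblank; the module and signal names are invented for illustration, not taken from the real chip or from his design:

```verilog
// Frame N:   CPU writes the sprite description list into bank list_sel.
// Frame N+1: renderer reads that list (now ~list_sel) and draws into
//            the framebuffer bank not being displayed (~fb_sel).
// Frame N+2: scanout displays that framebuffer bank.
// Net result: two frames of sprite latency.
module sprite_pingpong (
    input  wire clk,
    input  wire vblank_rise,          // one-cycle strobe at start of vblank
    output reg  list_sel = 1'b0,      // list bank the CPU writes this frame
    output reg  fb_sel   = 1'b0      // framebuffer bank scanout reads
);
    always @(posedge clk) begin
        if (vblank_rise) begin
            list_sel <= ~list_sel;   // swap sprite description list banks
            fb_sel   <= ~fb_sel;     // swap draw/display framebuffer banks
        end
    end
endmodule
```

A sprite written into the list during frame N is rasterized during frame N+1 and scanned out during frame N+2, which is exactly where the two frames come from. The background layer skips both swaps, hence the one-frame relative offset described above.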
To implement these old systems accurately, imo, we should be creating Verilog models of each individual IC. I believe simplifications made to facilitate connections with other ICs, or to reduce the number of memory buses, should live in helper components or wrappers, not in the IC models themselves.
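As a sketch of what such a wrapper might look like, the background IC model keeps its original pin-level VRAM interface, and a helper arbitrates the shared memory between it and the 68000. The port names and the 14-bit address width here are assumptions of mine, not the actual chip's interface:

```verilog
// Helper/wrapper concept: the IC model is left untouched; the bus
// sharing lives here. All names and widths are invented for this sketch.
module vram_bus_wrapper (
    input  wire        active_display, // IC masters the bus while high
    // 68000 side
    input  wire [13:0] cpu_addr,
    input  wire        cpu_we,
    // background IC side (its native, unmodified interface)
    input  wire [13:0] ic_addr,
    // shared VRAM port
    output wire [13:0] vram_addr,
    output wire        vram_we
);
    // During active display the background IC owns the VRAM, exactly as
    // on the PCB; the CPU only gets through outside of that window.
    assign vram_addr = active_display ? ic_addr : cpu_addr;
    assign vram_we   = active_display ? 1'b0    : cpu_we;
endmodule
```

This keeps the per-IC model honest to the silicon while still letting an FPGA port collapse the PCB's many physical buses into fewer shared ones.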
Again, these are minimal problems, but they go against the grain for something being sold with promises of higher accuracy. It sounds like a very simple problem, but to do it "right" the rabbit hole goes a bit deeper than expected.