>>724630874
Possibly; I'm not an Elite Dangerous expert. It could also be badly optimized AI behavior: the more enemies you kill, the fewer agents there are whose AI behaviors need ticking, and if you're ticking a whole lot of agents every frame, especially agents that aren't even on-screen, you can explode your CPU time very quickly.
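To make that concrete, here's the kind of mitigation I mean, a minimal sketch with made-up names (Agent, tickAgents, the 8-frame interval are all for illustration, not from Elite Dangerous or any real engine): off-screen agents get ticked at a reduced rate so the CPU isn't burning frame time on stuff the player can't see.

// Sketch: throttle AI ticks for off-screen agents. Agent and the 8-frame
// interval are made up for illustration, not any real engine's API.
#include <vector>

struct Agent {
    bool onScreen = false;
    int framesSinceTick = 0;
    void tickAI(float dt) { /* expensive behavior tree / pathfinding here */ }
};

void tickAgents(std::vector<Agent>& agents, float dt) {
    for (auto& a : agents) {
        // On-screen agents tick every frame; off-screen agents only every 8th,
        // receiving the accumulated delta so their simulation doesn't drift.
        const int interval = a.onScreen ? 1 : 8;
        if (++a.framesSinceTick >= interval) {
            a.tickAI(dt * a.framesSinceTick);
            a.framesSinceTick = 0;
        }
    }
}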
I can tell you that this effect is why you can get excellent FPS with tessellation turned on (turning every flat quad into a dense mesh of totally redundant tris) while the un-retopo'd dragon from Forspoken tanks the FPS to single digits: the dragon has to compute smooth, bone-weighted deformation for those millions of verts every frame, rather than just passing them straight through to the rasterizer and fragment shader without asking questions.
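For a picture of what that per-vertex deformation costs, here's a minimal linear blend skinning sketch; the types are made up and real engines do this in the vertex shader, but the arithmetic is the same:

// Minimal linear blend skinning sketch (made-up types; real engines run this
// per-vertex in the vertex shader, but the arithmetic is the same).
#include <array>
#include <vector>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[16]; };  // column-major 4x4 bone matrix

// Transform a point by a 4x4 matrix, assuming w = 1.
Vec3 transformPoint(const Mat4& M, const Vec3& v) {
    return {
        M.m[0] * v.x + M.m[4] * v.y + M.m[8]  * v.z + M.m[12],
        M.m[1] * v.x + M.m[5] * v.y + M.m[9]  * v.z + M.m[13],
        M.m[2] * v.x + M.m[6] * v.y + M.m[10] * v.z + M.m[14],
    };
}

struct SkinnedVertex {
    Vec3 position;
    std::array<int, 4>   boneIndex;   // four bone influences per vertex
    std::array<float, 4> boneWeight;  // weights sum to 1
};

// Four matrix transforms plus a weighted blend, per vertex, per frame.
Vec3 skinVertex(const SkinnedVertex& v, const std::vector<Mat4>& bones) {
    Vec3 out{0, 0, 0};
    for (int i = 0; i < 4; ++i) {
        Vec3 p = transformPoint(bones[v.boneIndex[i]], v.position);
        out.x += v.boneWeight[i] * p.x;
        out.y += v.boneWeight[i] * p.y;
        out.z += v.boneWeight[i] * p.z;
    }
    return out;
}

Multiply that by a few million verts on one un-retopo'd mesh, every frame, and the vertex stage stops being free.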
I've posted this pic before from when I was profiling my own game: I go from rendering 6 million tris to slightly under 4 million tris and LOSE 10 FPS, because the draw call count quadruples from 400 to 1600 (draw calls are CPU bound). And this is on a GTX 1070, for fuck's sake. And that's with only roughly 170MB of VRAM utilized; if you start hitting the swap point on your VRAM limit, god help you.
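The usual fix for that is batching/instancing so the CPU submits far fewer calls. A sketch, with made-up DrawItem/submitInstanced names standing in for whatever your renderer actually calls underneath (e.g. glDrawElementsInstanced or DrawIndexedInstanced):

// Sketch of draw-call batching: group objects that share a mesh + material
// and submit each group as a single instanced call instead of one call each.
// DrawItem and submitInstanced are made-up names, not any real engine's API.
#include <cstdio>
#include <map>
#include <utility>
#include <vector>

struct DrawItem {
    int meshId;
    int materialId;
    float transform[16];  // per-instance model matrix
};

// Stand-in for the actual instanced GPU submission.
void submitInstanced(int meshId, int materialId,
                     const std::vector<const float*>& transforms) {
    std::printf("1 draw call: mesh %d, material %d, %zu instances\n",
                meshId, materialId, transforms.size());
}

void drawScene(const std::vector<DrawItem>& items) {
    // Bucket by (mesh, material); each bucket becomes one instanced draw.
    std::map<std::pair<int, int>, std::vector<const float*>> batches;
    for (const auto& it : items)
        batches[{it.meshId, it.materialId}].push_back(it.transform);

    for (const auto& [key, transforms] : batches)
        submitInstanced(key.first, key.second, transforms);
}

If those 1600 objects share a handful of meshes and materials, they collapse back down to a handful of calls, and the GPU stops waiting on the CPU.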
The point is, GPUs are architectures designed to rasterize millions of triangles quickly, and they are very, very good at doing it. Most major framerate struggles come from situations where you aren't allowing them to do that: either because you're making them sit around waiting for the CPU to send instructions, or because you're making them shade the same pixels over and over, with later work stalled behind earlier work (i.e. lots of overdraw).
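The standard overdraw mitigation, just so this isn't hand-waving: draw opaque geometry roughly front-to-back so early depth testing rejects hidden fragments before the fragment shader runs. A sketch with made-up types:

// Sketch: sort opaque draws front-to-back so early-Z kills occluded fragments
// before they reach the fragment shader. Types are made up for illustration.
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

struct Draw {
    Vec3 center;  // rough world-space center of the object
    // ...mesh/material handles would go here...
};

float distSq(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Nearest objects first: anything behind them fails the depth test cheaply
// instead of being shaded and then overwritten anyway.
void sortOpaqueFrontToBack(std::vector<Draw>& draws, const Vec3& cameraPos) {
    std::sort(draws.begin(), draws.end(),
              [&](const Draw& a, const Draw& b) {
                  return distSq(a.center, cameraPos) < distSq(b.center, cameraPos);
              });
}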