Ok so I tried something a bit different.
Adding a 1Meg to the LED resistor didn't make much difference to the operating point. I then removed the shunt regulator board and hooked the CCS boards up directly to the B+. Instead of the 1Meg, I paralleled in the 18k plate resistors that had been removed. I now have all the LEDs shining brightly with a healthy 1.6V across each. There's a 1V drop across R1, which works out to about 4mA. Raw B+ is sitting around 115V with ~40mA being drawn from the supply, and the plates are now at about 74V. If my math is right, this isn't particularly efficient, since each LED path is drawing more current than the triode! I'd need just the right resistor value to bias the LEDs at a lower current while still keeping a 1.6V drop on both.
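For my own sanity, here's the back-of-envelope math in Python. The 250 ohms for R1 is just what the 1V/4mA implies, and I'm assuming the 18k feeds the LED from roughly the plate node; if the actual values or topology differ, the numbers move accordingly:

```python
# Rough operating-point check -- assumed values, not from the manual.
r1 = 250           # ohms, implied by ~1 V across R1 at ~4 mA (assumption)
v_r1 = 1.0         # measured drop across R1
i_r1 = v_r1 / r1   # current through R1, ~4 mA

v_plate = 74.0     # measured plate voltage
v_led = 1.6        # measured LED drop
r_bleed = 18e3     # the repurposed 18k plate resistor

# Extra current fed into the LED through the 18k, assuming it runs
# from the plate node down to the LED (my guess at the hookup).
i_led_bleed = (v_plate - v_led) / r_bleed   # ~4 mA

print(f"I(R1)        = {i_r1 * 1e3:.1f} mA")
print(f"I(18k->LED)  = {i_led_bleed * 1e3:.1f} mA")
print(f"Raw supply   = ~{115 * 0.040:.1f} W at 115 V / 40 mA")
```

So roughly as much current is going through the LED bleed path as through R1, which is why it strikes me as wasteful.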
Apologies if I'm coming across as a nuisance customer here. I work in electronics myself, so I know how much of a pain it can be when a customer knows just enough to be dangerous, and how much time you can end up spending to satisfy their whims.
My original curiosity about the bias current needed to be satisfied, though, and I don't expect you guys to indulge me if you'd rather not deal with tweaks that aren't in the manual.
But I do wonder whether there's more benefit (measurable, sonic, or otherwise) to optimizing for higher bias current versus using the shunt regulator. Obviously the transformer can only deliver so much current (53mA according to the label), and I know a shunt regulator burns a fair bit of juice itself, so both features might not be possible in this design. My understanding of triodes is that more current means more linearity, while a shunt-regulated supply offers lower noise and a lower impedance to ground. I could just listen both ways and decide for myself, but I figured I'd consult the Doc first and see whether you explored this route during R&D, so I'm not just retreading ground you've already covered.
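Just so you can see where my head's at, this is the crude current budget I'm working from. The shunt standing current here is a placeholder I made up, not a number from your design:

```python
# Crude supply-current budget -- placeholder numbers where noted.
i_transformer_max = 53e-3   # A, from the power transformer label
i_draw_now = 40e-3          # A, measured with the CCS boards on raw B+

# Hypothetical standing current a shunt regulator might burn on top of
# the load -- pure placeholder, not a value from the actual board.
i_shunt_standing = 10e-3

headroom = i_transformer_max - i_draw_now
print(f"Headroom without the shunt: {headroom * 1e3:.0f} mA")
print(f"Headroom if a shunt also burned ~{i_shunt_standing * 1e3:.0f} mA: "
      f"{(headroom - i_shunt_standing) * 1e3:.0f} mA")
```

If that placeholder is anywhere near realistic, there isn't much room to push the bias current up and keep the shunt regulator at the same time, which is really the heart of my question.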