Compression Ratio and Boost Query

I am in the process of a budget turbocharged 318 LA build, and there seems to be a lot of common info suggesting a limit of around 6 psi of boost for an 8.5:1 static compression engine.

I am aware of how timing events affect dynamic compression, and how, all things being equal, these principles typically apply to forced induction as well. Anyhow... I have tried most of the formulas and seen most of the common (and uncommon) charts/graphs.

My question (and what they don't seem to explain) is this:

I had a 1991 4G63T DSM (Talon/Laser/Eclipse) that ran 15 psi on 91 octane with zero signs of detonation. It had no tune, and was bone stock other than a ported and clipped 16G turbo upgrade, along with the cylinder head being decked .0015". My guess is the decking pushed the factory 7.8:1 compression up to around 8:1. I understand the ECM was likely pulling timing and the head was a fairly efficient aluminum casting. However, that alone doesn't explain why most 'boost compression' charts say that would be a 'race gas' engine.

Why can’t I run 15 psi (max) on a properly prepped/tuned 318 with 8.5:1 compression?

Furthermore: if a 6 psi limit is true, try finding a turbocharger whose compressor map stays efficient at such a low pressure ratio and engine speed while still meeting the potential peak CFM demand of a small V8.
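To put rough numbers on that demand, here is a quick Python estimate of the mass flow and pressure ratio a 318 would ask of the compressor. The inputs are my own assumptions, not anything from a chart: 5,500 rpm, 90% VE, 6 psi of boost at sea level, and roughly 140 F charge temperature.

```python
# Rough estimate of where a boosted 318 would sit on a compressor map.
# Assumed numbers (mine, not from any chart): 5,500 rpm, 90% VE,
# 6 psi of boost, 14.7 psia ambient, ~140 F manifold charge temperature.

DISP_CI    = 318.0     # engine displacement, cubic inches
RPM        = 5500.0    # engine speed where peak airflow is needed
VE         = 0.90      # assumed volumetric efficiency
BOOST_PSI  = 6.0       # gauge boost pressure
P_AMB_PSIA = 14.7      # ambient pressure, psia
T_MAN_F    = 140.0     # assumed manifold charge temperature, deg F

# Volumetric flow the engine ingests at manifold conditions (CFM).
# A four-stroke fills its displacement once every two revolutions: CID * RPM / 3456.
cfm_manifold = DISP_CI * RPM / 3456.0 * VE

# Air density at manifold conditions (ideal gas, R = 53.35 ft*lbf/(lbm*R)).
p_man_psf = (P_AMB_PSIA + BOOST_PSI) * 144.0   # absolute manifold pressure, lb/ft^2
t_man_r   = T_MAN_F + 459.67                   # manifold temperature, degrees Rankine
rho_man   = p_man_psf / (53.35 * t_man_r)      # lb/ft^3

mass_flow_lb_min = cfm_manifold * rho_man
pressure_ratio   = (P_AMB_PSIA + BOOST_PSI) / P_AMB_PSIA

print(f"Airflow at manifold conditions: {cfm_manifold:.0f} CFM")
print(f"Mass flow:                      {mass_flow_lb_min:.1f} lb/min")
print(f"Pressure ratio:                 {pressure_ratio:.2f}")
```

With those assumptions it works out to roughly 40+ lb/min at a pressure ratio of only about 1.4, which on a lot of common compressor maps lands well to the right of the efficiency island. That is the mismatch I'm getting at.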

Anyone have real world experience boosting a 318 that can weigh in?

Thanks,

I had a similar question about boost on pump gas. Here is what an engineer friend wrote up for me; it makes sense now. And trust @TT5.9mag's advice. He has been helping me as well.

The simplified version is to use the ideal gas law PV = nRT to calculate the moles of air trapped in the cylinder, n, by rearranging to n = (PV)/(RT), where P = pressure, V = volume, R = the gas constant, and T = absolute temperature. Starting volume is the cylinder displacement. Starting pressure is about 90 kPa for naturally aspirated, or 90 kPa + (boost pressure x 0.90) for pressure charged; the 90% factor is an estimate of expected volumetric efficiency. Temperature is the expected intake charge temperature, converted to Kelvin: roughly 35 C for N/A and 60 C for pressure charged. From that you can calculate the amount of air in the cylinder at bottom dead center, i.e. at cylinder filling.
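A quick sketch of that step in Python, assuming one 318 cylinder (318 ci / 8 cylinders, roughly 0.651 L swept), about 15 psi (~103.4 kPa) of boost, and the 90 kPa / 90% VE / 35 C / 60 C figures above:

```python
# Moles of air trapped in one cylinder at BDC, per the PV = nRT step above.
# Assumes one 318 cylinder: 318 ci / 8 ~= 39.75 ci ~= 0.651 L swept volume.

R = 8.314  # universal gas constant, J/(mol*K)

def moles_at_bdc(boost_kpa=0.0, charge_temp_c=35.0, swept_litres=0.651):
    """n = PV/(RT), with the 90% VE assumption folded into the pressure term."""
    p_pa = (90.0 + 0.9 * boost_kpa) * 1000.0   # effective cylinder pressure, Pa
    v_m3 = swept_litres / 1000.0               # swept volume, m^3
    t_k  = charge_temp_c + 273.15              # absolute temperature, K
    return p_pa * v_m3 / (R * t_k)

n_na    = moles_at_bdc()                                      # N/A, 35 C
n_boost = moles_at_bdc(boost_kpa=103.4, charge_temp_c=60.0)   # ~15 psi, 60 C

print(f"N/A cylinder fill:     {n_na:.4f} mol")
print(f"15 psi cylinder fill:  {n_boost:.4f} mol  ({n_boost / n_na:.2f}x the air)")
```

With these assumptions the boosted fill works out to roughly 1.9 times the air of the naturally aspirated one, even after accounting for the hotter charge.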

You know the compression ratio, so you know V1 (volume 1) and V2 (compressed volume 2). The shortcut (P1 x V1)/V2 = P2 only covers the volume change and assumes the charge stays at the same temperature; for an actual compression stroke, use the polytropic relation P2 = P1 x (V1/V2)^k with k of roughly 1.3 to find your compressed cylinder pressure. Once you have P2, you can again use PV = nRT to calculate your compressed temperature: T2 = (P2 x V2)/(n x R). Now that is purely the compression-based pressure and temperature change, but in spark-ignited engines the ignition is before TDC and peak cylinder pressure rises well above the rise from static compression alone.
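Continuing the same numbers, here is a sketch of the compression step, assuming a polytropic exponent of about 1.3 to approximate a real (non-adiabatic) compression stroke:

```python
# End-of-compression pressure and temperature, continuing the numbers above.
# A polytropic exponent k ~= 1.3 approximates a real (non-adiabatic) compression stroke.

def compression_state(p1_kpa, t1_c, cr, k=1.3):
    """Return (P2 in kPa, T2 in C) for compression from V1 down to V2 = V1/CR."""
    p2_kpa = p1_kpa * cr ** k                  # P2 = P1 * (V1/V2)^k
    t2_k   = (t1_c + 273.15) * cr ** (k - 1)   # T2 = T1 * (V1/V2)^(k-1)
    return p2_kpa, t2_k - 273.15

# 8.5:1 static CR: N/A (90 kPa, 35 C) vs ~15 psi of boost (~183 kPa effective, 60 C)
for label, p1, t1 in [("N/A", 90.0, 35.0), ("15 psi", 183.1, 60.0)]:
    p2, t2 = compression_state(p1, t1, cr=8.5)
    print(f"{label:7s} P2 ~ {p2:5.0f} kPa   T2 ~ {t2:3.0f} C")
```

Those end-of-compression pressures and temperatures are the starting point that the knock discussion below builds on.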

This is where "late" ignition comes into play: you delay the ignition of a pressure-charged engine to limit the maximum cylinder pressure (Pmax), which limits the maximum cylinder temperature, which directly affects the propensity for knock. The same thing is done on naturally aspirated engines, but not to the same extent. For example, an engine with 10:1 CR might run a maximum cylinder pressure of around 60 bar. A boosted engine with 8.5:1 CR can run nearly the identical maximum cylinder pressure of 60 bar, but the crank angle of Pmax is much later in the cycle and, with the extra charge mass in the cylinder, the area under the pressure curve is larger, so it generates more power.
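One way to see why pushing Pmax later helps is pure geometry: the same heat release landing in a larger cylinder volume produces a smaller pressure spike. A rough sketch, assuming ballpark-stock 318 dimensions (3.91" bore, 3.31" stroke, 6.123" rod, 8.5:1 CR):

```python
import math

# Cylinder volume past TDC for ballpark-stock 318 geometry (assumed:
# 3.91" bore, 3.31" stroke, 6.123" rod, 8.5:1 compression ratio).
# The same heat release into a larger volume means a smaller pressure rise,
# which is the mechanism behind retarding ignition to cap Pmax.

BORE, STROKE, ROD, CR = 3.91, 3.31, 6.123, 8.5   # inches, inches, inches, ratio

swept     = math.pi / 4.0 * BORE ** 2 * STROKE   # swept volume per cylinder, ci
clearance = swept / (CR - 1.0)                   # clearance volume at TDC, ci
crank     = STROKE / 2.0                         # crank radius, inches
rod_ratio = ROD / crank                          # connecting rod length / crank radius

def cyl_volume(deg_atdc):
    """Cylinder volume (ci) at a crank angle after TDC, from slider-crank geometry."""
    th = math.radians(deg_atdc)
    x = rod_ratio + 1.0 - math.cos(th) - math.sqrt(rod_ratio ** 2 - math.sin(th) ** 2)
    return clearance + (swept / 2.0) * x

for deg in (0, 10, 20, 30):
    v = cyl_volume(deg)
    print(f"{deg:2d} deg ATDC: {v:5.2f} ci  ({v / clearance:.2f}x the TDC volume)")
```

By 20-30 degrees after TDC the volume is roughly 30-60% larger than at TDC, so the same burn raises the pressure correspondingly less. That is how retarded timing caps Pmax, at the cost of extracting less work early in the stroke.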

The lower 8.5:1 CR reduces the initial cylinder pressure and temperature from compression, which limits the propensity for knock and allows the late ignition. The trade-off is a loss in thermal efficiency due to the reduced expansion ratio, which requires additional air mass flow (more boost pressure) to make up the difference. That trade-off shows up as brake specific fuel consumption that will be much higher.
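The expansion-ratio penalty can be put in rough numbers with the ideal Otto-cycle efficiency, eta = 1 - 1/CR^(gamma - 1). This is a textbook idealization (real engines sit well below it, and gamma here is an assumed effective value), but it shows the direction of the trade:

```python
# Ideal Otto-cycle thermal efficiency vs compression (expansion) ratio.
# Real engines sit far below these numbers; the point is the relative drop
# from 10:1 to 8.5:1, i.e. the "loss in expansion ratio" mentioned above.

GAMMA = 1.3  # assumed effective ratio of specific heats for the hot charge

def otto_efficiency(cr, gamma=GAMMA):
    return 1.0 - cr ** (1.0 - gamma)

for cr in (8.5, 10.0):
    print(f"CR {cr:4.1f}:1  ->  ideal thermal efficiency ~ {otto_efficiency(cr):.1%}")
```

The ideal-cycle gap between 8.5:1 and 10:1 is only a couple of points of efficiency on its own; the retarded combustion phasing adds its own penalty on top, which is why the real-world BSFC difference ends up bigger than this idealization alone suggests.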