Coulomb counting is easy at the beginning, when the pack can be fully discharged and then charged, since the measured coulombs should equal the pack's capacity. However, we rarely ever fully discharge our bikes, and sometimes we don't wait for a full charge. This, along with self-discharge and other factors that coulomb counting can't account for, will ultimately throw uncertainty into the count. There are many "enhanced" coulomb counting algorithms intended to close this gap, but there's still much room for improvement. My only point is that perhaps Alta's engineers figured out how to do this more accurately than anyone else. A more accurate estimate of the available energy would allow them to push certain operational limits that would otherwise require a more conservative approach. I think that would be worth investing in if it were applicable to your business.
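For illustration only (a hypothetical sketch, not Alta's actual BMS code), basic coulomb counting is just integrating measured current against an assumed capacity, which is exactly where the drift creeps in if that capacity figure goes stale:

```python
# Hypothetical sketch only, not Alta's BMS code.
# SOC is tracked by integrating measured pack current; any error in the
# assumed capacity or in the current sensor accumulates over time.

def update_soc(soc, current_a, dt_s, capacity_ah):
    """Return new SOC after dt_s seconds at current_a amps (positive = discharge)."""
    delta_ah = current_a * dt_s / 3600.0   # amp-seconds -> amp-hours
    soc -= delta_ah / capacity_ah          # fraction of capacity consumed
    return min(max(soc, 0.0), 1.0)         # clamp to [0, 1]

# Example: a 5.8 kWh pack at an assumed ~350 V nominal is roughly 16.5 Ah
soc = 1.0
soc = update_soc(soc, current_a=40.0, dt_s=60.0, capacity_ah=16.5)
print(f"SOC after one minute at 40 A: {soc:.3f}")
```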
A. The Alta engineers didn’t make some masterful breakthrough in determining SOC% or degradation
B. The task isn’t horribly difficult:
If the resting cell voltage at 2.5 V = 0% and 4.2 V = 100% (NOTE: the SOC% to resting voltage relationship is not linear between those two points... this is an exercise), then charging from 0% to 100% should take roughly 5.8 kWh, give or take small impedance / heating losses.
So, charging from 50% to 100% should take about 2.9 kWh... hence, anything meaningfully less than that should be considered a capacity loss, or a temperature-related loss.
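As a rough sketch of that logic (numbers and function names are mine, purely illustrative), you can scale the energy actually accepted over a known SOC window back up to a full-pack capacity estimate:

```python
# Illustrative only: infer capacity fade from the energy accepted between
# two SOC points, using the figures from the exercise above.

NOMINAL_PACK_KWH = 5.8   # rated capacity from the exercise

def inferred_capacity_kwh(soc_start, soc_end, measured_kwh_in):
    """Scale the energy accepted over a known SOC window up to a full-pack estimate."""
    soc_delta = soc_end - soc_start
    if soc_delta <= 0:
        raise ValueError("soc_end must be greater than soc_start")
    return measured_kwh_in / soc_delta

# Example: only 2.6 kWh accepted going from 50% to 100% -> ~5.2 kWh pack,
# i.e. roughly 10% capacity loss (or a temperature-related shortfall).
est = inferred_capacity_kwh(0.50, 1.00, 2.6)
print(f"Estimated usable capacity: {est:.2f} kWh "
      f"({100 * (1 - est / NOMINAL_PACK_KWH):.0f}% below nominal)")
```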
It just doesn’t take much programming to calculate this from a simple table of published data (see the sketch after the list):
1. Cell performance at X temperature
2. Measured resting state voltage to % SOC
3. Degradation based on coulomb counting over time
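Here's a minimal sketch of item 2, with a made-up voltage-to-SOC table standing in for the manufacturer's published curve (the real curve is non-linear, as noted above):

```python
# Illustrative only: resting (open-circuit) voltage -> SOC% via a lookup table.
# The voltage points here are placeholders; the real curve comes from the
# cell manufacturer's datasheet.

import bisect

# (resting cell voltage, SOC fraction) pairs, sorted by voltage
OCV_TABLE = [
    (2.50, 0.00), (3.20, 0.05), (3.45, 0.20), (3.60, 0.40),
    (3.75, 0.60), (3.95, 0.80), (4.10, 0.95), (4.20, 1.00),
]

def soc_from_resting_voltage(v):
    """Linearly interpolate SOC between the table points."""
    volts = [p[0] for p in OCV_TABLE]
    if v <= volts[0]:
        return 0.0
    if v >= volts[-1]:
        return 1.0
    i = bisect.bisect_right(volts, v)
    v0, s0 = OCV_TABLE[i - 1]
    v1, s1 = OCV_TABLE[i]
    return s0 + (s1 - s0) * (v - v0) / (v1 - v0)

print(f"3.70 V at rest ~ {soc_from_resting_voltage(3.70):.0%} SOC (placeholder curve)")
```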
Probably the most basic part of determining degradation is not “chasing the needles,” as we call it in aviation. There must be a sanity check. The batteries are presumably assumed to be at 100% of calculated capacity on day 1, and I’d bet that they are discharged and charged at least once to verify battery performance and capacity (or maybe not after building a bunch of them, but that is putting a lot of faith in your cell vendor).
If, after some period of time, the cells show Y amount of degradation through some rudimentary metrics of cell temperature and coulomb counting, then they can’t be less degraded the next time. Degradation is permanent loss of capacity. There needs to be some logical averaging of data over time, within some threshold of expected performance (which the cell manufacturer will provide in great detail).
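A minimal sketch of that "can't be less degraded next time" rule, with arbitrary smoothing parameters: blend each new estimate into a rolling average, then clamp so the reported figure never decreases:

```python
# Illustrative only: enforce that reported degradation never decreases,
# while smoothing out noisy individual estimates.

def update_degradation(reported_so_far, new_estimate, history, smoothing_n=10):
    """Blend the new estimate into a rolling average, then clamp so the
    reported degradation is monotonically non-decreasing."""
    history.append(new_estimate)
    recent = history[-smoothing_n:]
    averaged = sum(recent) / len(recent)
    return max(reported_so_far, averaged)   # degradation is permanent

history = []
reported = 0.0
for noisy in [0.02, 0.015, 0.03, 0.025, 0.01]:   # noisy per-check estimates
    reported = update_degradation(reported, noisy, history)
print(f"Reported degradation: {reported:.1%}")
```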
I think I would collect time with the cells above 40C, 45C, 50C, 55C, etc., as well as overall time. Each hotter cell temperature bucket gets a higher weight for calculated degradation.
The same kind of table could be used with SOC% - time at 80%, 90%, 100% (see the sketch below).
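A sketch of that bookkeeping, with made-up bucket thresholds and sample rate: just accumulate hours in each temperature and SOC bucket as the BMS samples the cells.

```python
# Illustrative only: accumulate time spent in temperature and SOC buckets.
# Bucket edges and sample rate are made up for this example.

from collections import defaultdict

temp_buckets = defaultdict(float)   # hours at or above each temperature threshold
soc_buckets = defaultdict(float)    # hours at or above / below each SOC threshold

def log_sample(cell_temp_c, soc, dt_hours):
    """Called periodically by a hypothetical BMS loop."""
    for threshold in (40, 45, 50, 55, 60, 70):
        if cell_temp_c >= threshold:
            temp_buckets[threshold] += dt_hours
    for threshold in (0.80, 0.90, 1.00):
        if soc >= threshold:
            soc_buckets[threshold] += dt_hours
    if soc <= 0.10:
        soc_buckets["low"] += dt_hours

# Example: an hour parked fully charged on a hot day
for _ in range(60):
    log_sample(cell_temp_c=46.0, soc=1.00, dt_hours=1 / 60)
print(dict(temp_buckets), dict(soc_buckets))
```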
The sanity check degradation table might look like this:
Time:                 T-0   +1   +2   +3   +4
Normal degradation:    0%   1%   2%   3%   4%

Aggravating factors (add additional calculated degradation for):

Heat:
  Time at 70C = 1% additional degradation per unit of time
  Time at 60C = 0.5%
  Time at 50C = 0.2%

High or low SOC%:
  Time at 100% = 1% additional degradation per unit of time
  Time at 90% = 0.5%
  Time at 80% = 0.2%
  Time at 70% = 0.1%
  Time at 10% = 0.1%
  Time at 0% = 0.5%
So, the sanity check is a table of expected degradation over time. The primary inputs to that table are the amounts of time spent in extreme heat or at high SOC%, but there could be others, like time spent at extremely low SOC% (below 5-10%). The expected value from the table can then be compared against the actual coulomb-counted degradation to arrive at a reconciled figure (see the sketch below).
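Pulling the table together into code (the rates are the placeholder numbers from the table above, and the tolerance band is an arbitrary assumption), the expected figure and the coulomb-counted figure could be reconciled like this:

```python
# Illustrative only: turn the sanity-check table above into an expected
# degradation figure and compare it against the coulomb-counted value.
# All rates are the placeholder numbers from the table, per "unit of time".

NORMAL_RATE = 0.01                      # 1% per unit of time
HEAT_RATES = {70: 0.01, 60: 0.005, 50: 0.002}
SOC_RATES = {1.00: 0.01, 0.90: 0.005, 0.80: 0.002, 0.70: 0.001,
             0.10: 0.001, 0.00: 0.005}

def expected_degradation(units_of_time, time_at_temp, time_at_soc):
    """Base degradation plus the aggravating-factor additions."""
    total = NORMAL_RATE * units_of_time
    total += sum(HEAT_RATES[t] * hrs for t, hrs in time_at_temp.items())
    total += sum(SOC_RATES[s] * hrs for s, hrs in time_at_soc.items())
    return total

def sane_degradation(coulomb_counted, expected, tolerance=0.02):
    """Sanity check: trust the coulomb-counted figure only if it sits
    within a tolerance band of the table's expectation."""
    if abs(coulomb_counted - expected) <= tolerance:
        return coulomb_counted
    return expected   # fall back to the table when the count looks wonky

expected = expected_degradation(
    units_of_time=2,
    time_at_temp={70: 0.5, 50: 3.0},      # units of time spent in each bucket
    time_at_soc={1.00: 1.0, 0.90: 2.0},
)
print(f"Expected: {expected:.1%}, reconciled: {sane_degradation(0.035, expected):.1%}")
```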
I can tell you that as the cells get seriously degraded, the expected and calculated data might go wonky!