From: metro Chicago, Illinois, USA
Another recent (private) Dev Forum post:
Apart from that, I will see about implementing the proposed Lua EE "spell checker", and also about crafting a testing tool to vet proper Lua EE operation.
This effort is going well.
Quite well indeed. By the time I am finished, maybe by tomorrow, we should have an automated tool to vet proper Lua EE operation -- most aspects, if not quite all -- across several turns of test scenario play.
And it's done!
rober@Rob10rto /cygdrive/c/Games/Matrix Games/Middle East/middle_east
< [DEBUG ID 10] (lua.cpp, line 423, l_log()) 2.9285714285714 average_morale ()
> [DEBUG ID 10] (lua.cpp, line 423, l_log()) 2.8571428571429 average_morale ()
The command '~/cslint/csluaeechk' launches a special LuaEECheck1.scn in automated test mode.
The scenario (an adaptation of Thamad_1956.scn) runs through its three turns.
As each Lua EE function is called, its value or effect is logged to lua.log.
When the test scenario finishes, lua.log is stripped of its date/time stamps.
The filtered output is diff'ed against a known-good, vetted reference output file, ~/cslint/csluaeechk.ref.
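The strip-and-compare step might look something like this sketch. To be clear, this is not the actual csluaeechk internals; the timestamp format in the sed pattern, and the function name, are assumptions for illustration only.

```shell
# Hypothetical sketch of a csluaeechk-style strip-and-diff check.
# The "YYYY-MM-DD HH:MM:SS " stamp format is an assumption; the real
# lua.log stamps may differ.
luaee_check() {
    log="$1"; ref="$2"
    # Strip the leading date/time stamp from each log line.
    sed -E 's/^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2} //' "$log" > "$log.filtered"
    # diff exits 0 when the filtered run matches the vetted reference.
    if diff "$log.filtered" "$ref"; then
        echo "PASS: Lua EE output matches reference"
    else
        echo "DIFFS found -- review output above"
    fi
}
```

Something like 'luaee_check lua.log ~/cslint/csluaeechk.ref' would then report either a clean pass or the offending diff lines.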
In the csluaeechk sample output above, you will note the diffs between the reference and last-run average_morale() values. That is nothing to worry about; it's an artifact of the testing methodology. What is important is the absence of any other diffs -- an indication that the latest output compares well with the known-good reference output, and that the current Lua EE is operating properly.
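If one ever wanted to silence that expected noise entirely, known-variable lines could be excluded before diffing so that only unexpected diffs remain. This is just a suggestion, not something the current csluaeechk does:

```shell
# Suggestion only (not current csluaeechk behavior): filter out the
# known-variable average_morale readings before diffing, so any
# remaining diff is a genuine failure. Sample data is made up here.
printf '2.9285714 average_morale ()\n0.5 some_other_value ()\n' > sample.log
grep -v 'average_morale' sample.log
# prints: 0.5 some_other_value ()
```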
We now have a handy-dandy tool for running automated Lua EE QA checks whenever necessary -- after significant code changes that might impact the EE, for instance.
And if it's not clear: the Lua EE has been vetted (again) for CSME 2.0!