I received responses from both Jeff Magee and Jeff Kramer concerning the question raised in the previous entry. Both agreed that the two specifications are semantically identical and that the non-optimized LTS can be converted to the optimized version by asking LTSA to minimize the non-optimized version. To see this for yourself, fire up LTSA and enter the first FSP specification:

const N = 1
SUM = (in[a:0..N][b:0..N] -> out[a+b] -> SUM).

As we saw last time, this spec produces this LTS:

Now click the minimize button, as shown below.

LTSA minimizes the LTS, which will now look exactly like the LTS produced by this specification (see the previous entry for details):

const N = 1
SUM = (in[a:0..N][b:0..N] -> TOTAL[a+b]),
TOTAL[s:0..2*N] = (out[s] -> SUM).

Now for the responses. Here is what Jeff Magee had to say:

`Hi Ken,`

`The compiler does not guarantee to produce minimal LTSs. In the second case as the states are declared explicitly, there is only one state generated for both 0+1 and 1+0. In the first case, a state for each state is implicitly generated. Minimizing the first LTS will return the second, so as you note, they are exactly equivalent.`

`Cheers`

`Jeff`

And, here is what Jeff Kramer said in his reply:

`Hi Ken,`

`This is just the way that the FSP is compiled and in the first case, without knowledge of the semantics of +, a+b might not be the same as b+a, whereas in the second case the state is first computed and becomes s.`

`The first case can simply be minimised to produce the second as they are observationally equivalent.`

`Best wishes`

`jeff`

In interpreting their responses: the second FSP specification explicitly defines a destination state, represented by the TOTAL process. LTSA computes the explicit state reached when control transfers to TOTAL, and so it can determine at compile time that the in[0][1] and in[1][0] actions both lead to the same state. In the first specification, the compiler makes no assumptions about the semantics of the addition operator, so it conservatively generates different states for in[0][1] and in[1][0], producing the non-optimal LTS we saw above. Once we invoke the minimize operation, LTSA examines the state machine more closely, recognizes that two of the states in the non-optimal LTS have the same semantic meaning, and collapses them into a single state, producing the optimized (or rather, minimal) LTS.
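To make the collapse concrete, here is a small Python sketch of the idea. This is my own illustration, not LTSA's actual algorithm: the state names and the `minimize` function are hypothetical. It models the non-minimized LTS for N = 1 as a transition table and merges equally-behaving states by partition refinement (strong bisimulation, which coincides with observational equivalence here since there are no hidden actions).

```python
# Non-minimized LTS for N = 1: a distinct intermediate state is generated
# for every in[a][b] action, even when a + b is the same.
# State 0 is SUM; state (a, b) means "about to do out[a+b]".
nonmin = {
    0: {f"in[{a}][{b}]": (a, b) for a in (0, 1) for b in (0, 1)},
    **{(a, b): {f"out[{a + b}]": 0} for a in (0, 1) for b in (0, 1)},
}

def minimize(lts):
    """Merge states with identical behaviour via partition refinement."""
    # Start with every state in one block, then repeatedly split blocks
    # until each state's transitions reach the same blocks as the rest
    # of its block. Including the old block id in the signature means
    # each round only refines the partition, so this terminates.
    blocks = {s: 0 for s in lts}
    while True:
        sigs = {
            s: (blocks[s], frozenset((a, blocks[t]) for a, t in lts[s].items()))
            for s in lts
        }
        ids, new = {}, {}
        for s in lts:                       # fresh id per distinct signature
            new[s] = ids.setdefault(sigs[s], len(ids))
        if new == blocks:
            return blocks
        blocks = new

blocks = minimize(nonmin)
print(len(set(blocks.values())))        # -> 4 states after minimization
print(blocks[(0, 1)] == blocks[(1, 0)]) # -> True: 0+1 and 1+0 collapse
```

Running it shows the four intermediate states collapse to three (one per possible sum), giving the same four-state LTS that the second specification produces directly: the states reached by in[0][1] and in[1][0] end up in the same block, exactly the merge the minimize button performs.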

Hope this helps and thanks very much to the student who asked the question that led to this discussion!