Type safe Eiffel (4, chapters "Generic constraints based on conformance" and "Validation with local analysis" included)
- Tags:
- catcall
- type safety
From its beginning Eiffel has been characterized as a strongly typed language. But from the start there has been a hole in the type system which has not been fixed to this day (the so-called catcall problem).
What is the hole?
The existence of the hole can be demonstrated with very simple examples. There is a class COMPARABLE defining the basic comparison operators "<", "<=", ">", ">=". The class COMPARABLE is an ancestor of types like INTEGER, REAL, STRING, ..., i.e. of all types which a mathematician would characterize as having an order relation.
Consider the following code snippet.
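A plausible reconstruction (the original listing is not preserved; the feature name and the locals `i` and `s` are assumptions, the names being taken from the text below):

```eiffel
catcall_demo
    local
        i, s: COMPARABLE
    do
        i := 1                  -- dynamic type INTEGER
        s := "Hello world."     -- dynamic type STRING
        if i < s then           -- compiles, but {INTEGER}.`<' expects an INTEGER
            print ("less%N")
        end
    end
```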
This is legal Eiffel, but produces a catcall with the expression "i < s", because it tries to compare an object of type INTEGER with an object of type STRING. Unfortunately the question "Is 1 less than "Hello world."?" has no answer, and the runtime has to react with an exception.
What happened here? The class COMPARABLE is defined along the following lines.
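A sketch of the relevant part (reconstructed from the standard library; the full partial implementation is shown in the section on covariance below):

```eiffel
deferred class COMPARABLE
feature
    is_less alias "<" (other: like Current): BOOLEAN
            -- The argument is anchored to Current: implicit covariance.
        deferred
        end
    -- "<=", ">" and ">=" carry the same anchored signature.
end
```

Every descendant therefore expects an argument of its own type.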
Another example:
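A reconstruction of the example (the feature name is assumed):

```eiffel
array_demo
    local
        sa: ARRAY [STRING]
        aa: ARRAY [ANY]
    do
        sa := <<"Hello", "world">>
        aa := sa        -- legal: ARRAY [STRING] conforms to ARRAY [ANY]
        aa [2] := 1     -- legal for the compiler, fails at runtime
    end
```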
The array attached to "aa" is still an ARRAY [STRING]. But since the static type of "aa" is ARRAY [ANY], the assignment "aa [2] := 1" is legal Eiffel. Putting an INTEGER into a place where a STRING is expected can only result in a runtime exception.
ECMA Eiffel (like all its predecessors) says an ARRAY[STRING] "is a" ARRAY[ANY]. But it is **not**. Into an ARRAY[STRING] you can put only strings or something conformant to strings, into an ARRAY[ANY] you can put anything.
Why has it not yet been solved?
The problem has been known to the inventor of Eiffel for a long time. In the book "Object-Oriented Software Construction" (OOSC2, 1997) Bertrand Meyer analyzed and explained it. Since then some solutions have been proposed, but none of them has been satisfying.
Surely some bright people have tried to solve the problem. If bright people have not solved it in more than 15 years, does that mean that the problem has no solution?
But the problem has solutions. In OOSC2 Bertrand Meyer explained that catcalls only happen when polymorphic attachment (attaching a descendant to an ancestor, i.e. the "is_a" relation) and covariant redefinition (e.g. arguments of type "like Current") come together.
So if you forbid either polymorphic attachments or covariant redefinitions, you have a solution which makes catcalls impossible.
Unfortunately neither is satisfactory. The former would rob Eiffel of one of the most powerful mechanisms of object orientation (dynamic binding); the latter would make classes like COMPARABLE (i.e. partial implementations) impossible.
Therefore a satisfactory solution has to retain both polymorphism and covariance.
But neither polymorphism nor covariance is an all-or-nothing constraint. They boil down to a lot of explicit or implicit constraints. These constraints define the space in which a solution has been searched for. Since no satisfactory solution (one which does not violate the constraints) has been found up to now, it seems that the solution space defined by the constraints might be empty.
It is like a student of mathematics who tries to solve a problem with 4 equations and 3 unknowns, where each equation stands for a constraint. Using 3 of the constraints he comes up with a solution. Unfortunately the solution does not satisfy the 4th constraint.
Hypothesis: There are too many or too strong constraints (explicit or implicit). Finding a solution implies modifying (i.e. weakening) or getting rid of some constraints.
What can we do? I propose to make all the constraints explicit and discuss them. Let us formulate the constraints as fine-grained as possible in order to find an acceptable modification, i.e. let us scrutinize the potential solution space with the goal of finding a non-empty one.
The definition of the solution space will require some decisions. Once we have a non-empty solution space, at least one and possibly more solutions will (hopefully) show up.
From my analysis up to now, the following constraints frame the problem. They will be discussed in the following sections in order to find out in which manner we can modify them.
Constraints:
- Universal conformance (UC)
- Feature rich ANY (FRA)
- Covariance (COV)
- Backward compatibility (BC)
- Promiscuous generic conformance (PGC)
- Generic constraints based on conformance (GCC)
- Validation with local analysis (VLA)
- ...
Maybe the list will become longer and/or more fine-grained during the discussion.
Analysis of the constraints
Universal conformance (UC)
Universal conformance means that all types conform to ANY. The type graph has a unique top. Any object in your system can be attached to an entity of type ANY.
This can lead to some holes in the type system together with features of ANY which have implicit covariant signatures like is_equal and copy with an argument of type "like Current". We can demonstrate the problem with a slight modification of one of the above examples:
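A minimal sketch (feature name assumed):

```eiffel
equality_demo
    local
        a, b: ANY
    do
        a := 1                  -- dynamic type INTEGER
        b := "Hello world."     -- dynamic type STRING
        if a.is_equal (b) then  -- {INTEGER}.is_equal expects an INTEGER: catcall
            print ("equal%N")
        end
    end
```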
One of the arising questions is: Is it really necessary that all things conform to ANY?
What is the advantage and disadvantage of having universal conformance?
Java has universal conformance (class Object) and C++ has no universal conformance. SmartEiffel has chosen a liberal way: you can have conformance to ANY or not; it is up to the designer.
Discussion of UC:
UC gives us the possibility of having just an object. You can design classes which store any type of object. You can store it and move it around (i.e. its reference).
As soon as you retrieve it, you cannot do a lot with it unless you find out what type of object it is. With an object test you have to "downcast" the object to do something meaningful with it.
Having UC is some kind of convenience. If you don't have UC you can always give a group of types a common conformant ancestor type to store them and move them around. This has the advantage of being able to call some common features of this group of types and of assuring that only conformant descendants of the base type can be stored in a container class.
The opponents of UC sometimes argue that using ANY is just a lack of design: if you want to treat a group of types in a common way, design some conformant base type for them.
Then there is the argument of beauty. Since its beginning Eiffel has had a unique top and bottom of the type system: ANY and NONE. This is very pleasing for anybody who looks at Eiffel with a mathematical eye.
Another argument for UC is the possibility to implement system routines like print (a: ANY) which prints a representation of any object. Without UC a routine like print is not possible, which would be some loss of convenience.
To summarize, neither the case for nor the case against universal conformance has very strong arguments. UC does not seem to be a principle at the heart of the Eiffel language. If it helps (which might be the more interesting question), UC can be thrown overboard.
By the way: the ECMA committee has already given up universal conformance. The type
Feature rich ANY (FRA)
All classes in Eiffel inherit the features of ANY. These are a bunch of features: type, is_equal, copy, twin, once_manager, io, is_deep_equal, deep_twin, do_nothing, default_rescue, ... just to mention some of them (in EiffelStudio 6.3 the class ANY has 29 public features).
These features are imposed on all classes and therefore on all types, independently of whether they need them or not. If you study some real-life classes, they usually don't need all features of ANY, or need just one or two of them. So for somebody looking for a minimal kernel of the language, this is a little bit disturbing.
All who know C are familiar with the famous "Hello world" example. The designers of C made the language core minimal, so that even for printing a simple string to standard output you have to request a library. Without "#include <stdio.h>" you cannot print anything. Full stop. The basic philosophy behind it: you only pay for what you ask for. If you don't ask for stdio, you won't pay for it.
The philosophy behind Eiffel's ANY seems to be just the opposite: you don't have to ask, you just have to pay. It is already included in the basic setup. Some might argue that unused features will finally be squeezed out by the optimizer, so in the final binary you won't pay for unused features. However, during the whole development process you have them in.
Let us get a taste of the different views. A hello-world program with FRA reads like this:
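A minimal sketch (class and feature names assumed):

```eiffel
class HELLO_WORLD
create
    make
feature
    make
            -- `print' comes for free, inherited from ANY.
        do
            print ("Hello world.%N")
        end
end
```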
Without FRA you would encounter something like this:
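A sketch under the assumption that `print' lives in a class STANDARD_INPUT_OUTPUT (the name used further below) which has to be requested explicitly:

```eiffel
class HELLO_WORLD
inherit
    STANDARD_INPUT_OUTPUT   -- ask for `print' explicitly
create
    make
feature
    make
        do
            print ("Hello world.%N")
        end
end
```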
Too noisy? I don't know. But up to now it is a matter of taste.
You might ask what this has to do with type safety. Admittedly the example with printing "Hello world." has nothing to do with type safety. It served to give a taste of FRA and of its opposite (a feature-poor ANY).
Things get more interesting if we look at features like is_equal and copy. Since they have an argument anchored to Current ("is_equal (other: like Current): BOOLEAN"), they imply covariant redefinition in descendants, i.e. any class which redefines is_equal will redefine the argument covariantly. INTEGER, REAL, STRING, ... all have a different definition of is_equal and are expecting an argument of the same type (or a conformant one).
And this, together with universal conformance (UC), hurts. UC implies the possibility of polymorphically attaching an object of any type to an entity of type ANY, and FRA makes it possible that is_equal can be called with an argument of any type, as long as it is called on an entity of type ANY, while the dynamic feature is expecting a specific one. These two facts open up a possible catcall with is_equal.
The current version of the ECMA Eiffel specification has tried to resolve the issue by giving is_equal another signature. The spec defined
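(the anchored argument replaced by ANY):

```eiffel
is_equal (other: ANY): BOOLEAN
```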
This is possible, but it has some ugly consequences. Look at the class INTEGER. Would you be pleased with the class text
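A sketch of what such a class text would look like (the real INTEGER is a built-in class; the object test is the point here):

```eiffel
expanded class INTEGER
inherit
    ANY
        redefine is_equal end
feature
    is_equal (other: ANY): BOOLEAN
            -- Is `other' an INTEGER with the same value?
        do
            if attached {INTEGER} other as i then
                Result := (Current = i)
            end
        end
end
```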
and the code snippet
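Presumably something like (feature name assumed):

```eiffel
comparison_demo
    local
        i: INTEGER
    do
        i := 1
        if i.is_equal ("Hello world.") then   -- now compiles; silently yields False
            print ("equal%N")
        end
    end
```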
I would say that it is a possibility to change all arguments of type "like Current" in features of ANY to "ANY" and not redefine them covariantly in descendants. With that, the catcalls with is_equal can be avoided.
But what about copy? What would you say to "i.copy("Hello world")"?
Bertrand Meyer stated some time ago that the ECMA committee had oscillated about the signature of is_equal, but that now the definition is_equal (other: ANY) is firm. Some weeks later another member of the committee stated that is_equal (other: like Current) is back. Maybe in the next meeting they ... I don't know.
But wait a moment. We are discussing the constraint "feature rich ANY" here. Therefore the real question is: is it necessary that the features is_equal, copy etc. are in ANY?
What would be the consequence of e.g. putting is_equal and copy into a class EQUALITY_FEATURES, is_deep_equal and deep_cloned into DEEP_FEATURES, and io and print into STANDARD_INPUT_OUTPUT?
The most obvious consequence is that you can no longer compare any two objects by the object comparison operator "~", because "~" relies on is_equal.
Objects of many types do not necessarily need the feature is_equal. But some surely do. All numbers need to be checkable for equality. E.g. the class INTEGER would read
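A sketch (EQUALITY_FEATURES is the hypothetical class introduced above):

```eiffel
expanded class INTEGER
inherit
    EQUALITY_FEATURES
        -- provides `is_equal' and `copy'
    -- further parents (COMPARABLE, NUMERIC, ...) and the
    -- arithmetic features omitted
end
```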
Without inheriting from EQUALITY_FEATURES, no assignment of expanded types would be possible. Expanded types have copy semantics; therefore the semantic effect of assignment is based on the semantics of the copy feature. Any expanded class not inheriting the equality features would not be assignable (as opposed to reference types, which are always assignable without having copy).
Removing FRA will give us "You only get what you ask for".
Besides the additional effort, it gives me some new freedom of design. Without FRA I can design classes which can be neither compared nor copied. Sometimes this might be my design intention. Maybe I want to say to you: just create objects of this class, modify them with the features I have designed for it, but do not ask for equality or copying; the class has not been designed for that. Why not?
The constraint FRA can be lifted, if it helps. The combination of UC, FRA and the anchored signatures of is_equal and copy cannot all be kept.
Covariance (COV)
Since catcalls can only happen when polymorphic attachment meets covariant redefinition, some might propose to get rid of covariance.
Bertrand and Emmanuel have written a little paper with the title "The world is covariant. Is it safe?".
The title already reveals that they don't consider removing covariance an option. They illustrate it with a CUSTOMER which can be served any type of BEVERAGE (i.e. CUSTOMER has the feature "serve (b: BEVERAGE)"). A BEVERAGE is either a SOFT_DRINK or ALCOHOL. MINORs are CUSTOMERs, but they shall have the more restrictive feature "serve (b: SOFT_DRINK)".
They say implicitly that modelling the world usually involves covariance. A more specific type might require more specific arguments in its features.
The problem arises if a MINOR is considered as a CUSTOMER (polymorphic attachment) and served an alcoholic beverage. That is the real-world correspondence of a catcall, which at least parents won't like.
Despite the potential catcall, the example is a strong case for covariance: its modelling power.
Another case for covariance is its practical implications.
Let us look into the class COMPARABLE.
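A reconstruction from the standard library (feature bodies slightly simplified):

```eiffel
deferred class COMPARABLE
feature
    is_less alias "<" (other: like Current): BOOLEAN
            -- The only deferred feature.
        deferred
        end

    is_less_equal alias "<=" (other: like Current): BOOLEAN
        do
            Result := Current < other or Current ~ other
        end

    is_greater alias ">" (other: like Current): BOOLEAN
        do
            Result := other < Current
        end

    is_greater_equal alias ">=" (other: like Current): BOOLEAN
        do
            Result := other <= Current
        end
end
```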
Note that in the class COMPARABLE only the feature "<" is deferred. The others, "<=", ">", ">=", are effective and based on "<" and "~". The class COMPARABLE is therefore also called a partial implementation. A descendant only has to effect (i.e. implement) the feature "<" and will get the other ones for free, consistent with a mathematical order relation.
Or look into the class NUMERIC.
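A sketch reduced to "+" and "-"; the postcondition is an assumed illustration of the consistency relation mentioned below:

```eiffel
deferred class NUMERIC
feature
    plus alias "+" (other: like Current): like Current
        deferred
        end

    minus alias "-" (other: like Current): like Current
        deferred
        ensure
            consistent_with_plus: Result + other ~ Current
        end
end
```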
NUMERIC does not provide a partial implementation like COMPARABLE. But it specifies contracts which the descendant has to obey. It specifies e.g. that the operations "+" and "-" have to satisfy some consistency relation. We can say that NUMERIC models a certain behaviour which INTEGER, REAL, COMPLEX, MATRIX, etc. have to satisfy.
Now imagine the classes COMPARABLE and NUMERIC without their implicit covariance. Does it make sense to specify
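(a signature without the anchor, i.e. without covariance):

```eiffel
plus alias "+" (other: NUMERIC): NUMERIC
```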
What is the sum of an INTEGER with a MATRIX?
No! We definitely want
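(the anchored, covariant form):

```eiffel
plus alias "+" (other: like Current): like Current
```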
So the case for covariance can be made with modelling power, partial implementations and behaviour classes. Covariance is a very powerful mechanism. Therefore nobody wants to get rid of covariance.
But: Is it an all-or-nothing decision? Do we want to be able to redefinecovariantly any feature in any class?
Maybe we can find a restricted form of covariance which gives us the advantages of covariance without its dangers.
Remember: covariance itself is not the problem; only covariance combined with polymorphic attachment and polymorphically calling the feature with covariantly redefined arguments.
So it could be possible to allow covariant redefinitions only where they cannot do any harm, i.e. instead of full covariance only a restricted form of covariance.
An attempt: allow covariant redefinitions only on features which are not inherited conformantly.
As a consequence, classes like COMPARABLE and NUMERIC can only be inherited non-conformantly.
What other restrictions of covariance can you imagine?
Backward compatibility (BC)
The changes of the Eiffel language have been made up to now in a very conservative and prudent manner. A compiler shall be able to support an old and a new version of the language at the same time. Changes in the language shall not break existing code.
This is in general a wise way to move forward. A code-breaking change is inconvenient to the users.
Therefore the fix of the hole in the type system is expected not to break existing code. Is this possible?
A solution of the catcall issue has to disallow possible catcalls. Since in the current version of Eiffel it is possible to code catcalls, the new version of the language must reject some of the currently valid Eiffel code.
Therefore the requirement of complete backward compatibility is too strong. It cannot be fulfilled.
But we don't want to give up backward compatibility in general. Well-written Eiffel code does not produce catcalls. But it might use Eiffel constructs which are catcall prone, even if in the specific case it does not produce a catcall.
Is it at least possible to make a change in the language which does not break existing code that does not produce catcalls?
Even this requirement might be too strong. The compiler has to analyze the code statically. It cannot do all the reasoning which humans can do. Even if we can prove that a specific piece of code cannot produce catcalls, it might be impossible for the compiler to prove that. In that case the compiler has to reject the code, i.e. break the code even if no actual catcall happens.
Therefore I propose to lift the requirement for backward compatibility completely (at least to open up the solution space). A satisfactory solution has not been found for more than 15 years now. If we pose too many constraints onto ourselves, we might not be able to come up with a good solution. Once we have a solution which is satisfactory without the backward compatibility constraint, we can try to polish and improve it to make it as backward compatible as possible.
Promiscuous generic conformance (PGC)
Conformance is based on inheritance. A descendant conforms to its conformant parents, i.e. if "class D inherit B ..." is encountered, D conforms to B. Conformance is transitive: if A conforms to B and B conforms to C, then A conforms to C.
This is not a complete description of conformance, but it captures the basic idea.
But Eiffel also has generic classes. There is a rule of conformance which opens up another path: if we have a generic class CG [G] and a type D which conforms to B, then CG [D] conforms to CG [B] as well.
In simple terms, ARRAY [STRING] conforms to ARRAY [ANY]. I.e. for generic types it is not only the conformant ancestry of the base class which determines conformance, but also the ancestry of its actual generic parameters.
Does this make sense? We can answer diplomatically: Yes and no!
As long as I only want to get an element of an ARRAY, an ARRAY [STRING] returns me a string. Since STRING conforms to ANY, I can attach an ARRAY [STRING] to an ARRAY [ANY] and the returned value is fine.
If I put something into the ARRAY [ANY] (which is in reality an ARRAY [STRING]), the compiler won't complain. But the attached object is an ARRAY [STRING], which does not like having INTEGERs or BOOLEANs put into it.
So yes, it is fine for all features not having arguments of formal generic type of the surrounding class. And no, every use of a feature having arguments of formal generic type of the surrounding class might produce a catcall.
Promiscuous generic conformance is part of the problem. We have to find a less liberal form of generic conformance in order to make Eiffel type safe.
The most rigid solution would be to disallow conformance based on conformance of actual generics, i.e. an ARRAY [STRING] cannot be attached to an entity of type ARRAY [ANY]. The question: does this cut off useful patterns?
I personally have not yet encountered useful patterns which require attaching an ARRAY [STRING] to an ARRAY [ANY]. But maybe you will protest and say "I use this pattern frequently to achieve XYZ". If you have a valid point, then we have to find something less rigorous.
A less rigorous solution has already been proposed in the above mentioned paper "The world is covariant. Is it safe?". It is based on the observation that only features having arguments of formal generic type are dangerous. E.g. in the class ARRAY
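(interface excerpt, reduced to the two relevant features):

```eiffel
class interface ARRAY [G]
feature
    item (i: INTEGER): G
            -- G occurs only as a result type.
    put (v: G; i: INTEGER)
            -- G occurs as an argument type.
end
```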
the feature "item" will never produce catcalls, but the feature "put" can do.
If we attach an object of type ARRAY [STRING] to an entity a: ARRAY [ANY], we don't want a.put (...) to be callable, i.e. we want a type which has the features of ARRAY [ANY] without the dangerous ones. In other words we want ARRAY [ANY] to "forget" some of its features.
Syntactically it has been proposed to use ARRAY [variant ANY] as that type. ARRAY [variant ANY] has all features of ARRAY [ANY] except those which have arguments of formal generic type. With that it is no problem to attach an object of type ARRAY [STRING] to an entity a: ARRAY [variant ANY], because you cannot call the offending feature a.put (...). Attaching an object of type ARRAY [STRING] to an entity a: ARRAY [ANY] won't be allowed any longer.
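A sketch of how the proposed annotation would be used (the `variant` syntax is the paper's proposal, not current Eiffel; feature name assumed):

```eiffel
variant_demo
    local
        sa: ARRAY [STRING]
        va: ARRAY [variant ANY]
    do
        sa := <<"Hello", "world">>
        va := sa                -- the only attachment still permitted
        print (va.item (1))     -- fine: `item' has no formal-generic argument
        -- va.put (1, 2)        -- rejected: `put' is not part of ARRAY [variant ANY]
    end
```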
What would you prefer? The simple, more rigorous solution, or the more complicated fine-grained solution still allowing some generic conformance?
Is there a third (or fourth, ...) way?
Generic constraints based on conformance (GCC)
Eiffel has the nice feature of constrained genericity.
The class ARRAY [G] does not have any constraint. You can use any valid type T to declare an ARRAY [T]. There is no need for ARRAY to know anything about the type of the elements it contains. It just stores and retrieves the elements but does not call any features of the elements.
If you design a kind of sorted sequence, the requirement is different. You want to store the elements in order. You must be able to compare them, i.e. to call a < b. Therefore you cannot accept any type as an actual generic for the sorted sequence. Instead you will specify
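(the standard notation for a constrained generic class):

```eiffel
class SORTED_SEQUENCE [G -> COMPARABLE]
```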
With that declaration Eiffel allows the type SORTED_SEQUENCE [T] only if T conforms to COMPARABLE. SORTED_SEQUENCE [INTEGER], SORTED_SEQUENCE [REAL] and SORTED_SEQUENCE [STRING] are valid; SORTED_SEQUENCE [COMPLEX] would be invalid.
Since the actual generic T has to conform to COMPARABLE, it is guaranteed that SORTED_SEQUENCE can call all features of COMPARABLE on entities of its formal generic type G.
This is consistent with the conformance rules: a formal generic type conforms to itself and its constraint. No type except itself conforms to a formal generic.
But what happens if we write
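A plausible reconstruction: a class which obtains the features of COMPARABLE through non-conforming inheritance (the attribute `item` is an assumption):

```eiffel
class T
inherit {NONE}          -- non-conforming inheritance
    COMPARABLE
feature
    item: INTEGER
    is_less alias "<" (other: like Current): BOOLEAN
            -- Effect the one deferred feature of COMPARABLE.
        do
            Result := item < other.item
        end
end
```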
The class T still has all features of COMPARABLE, so it could be used in the type SORTED_SEQUENCE [T]. But it cannot!
SORTED_SEQUENCE [T] is not a valid type, because T does not conform to COMPARABLE.
Remember that COMPARABLE contains features which might produce catcalls. Therefore a prudent user inherits from COMPARABLE only non-conformantly. But as a consequence he cannot put his type into a sorted sequence. How frustrating!
For that reason we have to discuss the basis on which constraints are defined. Is conformance the right way? Why do we use conformance to define constraints?
a) It is convenient. We have this notion of conformance and it does the job. So why not use it?
b) We might want to be able to use conformance in its basic form to make assignments. E.g.
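A sketch (CG and CONSTRAINT are placeholder names; the assignment `c := e` is the one referred to below):

```eiffel
class CG [G -> CONSTRAINT]
feature
    some_feature (e: G)
        local
            c: CONSTRAINT
        do
            c := e   -- valid only because G conforms to CONSTRAINT
        end
end
```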
Ad a) This is not convincing. We could use a weaker notion than conformance to specify constraints. It might not be difficult to find a way to allow non-conforming inheritance as well to satisfy the constraint.
Ad b) This is convincing if such a construct is really needed. I have not yet encountered a case where I want the above assignment c := e to be allowed. But if there are such useful patterns, we have to find a solution for that.
Anyhow: usually defining a constraint with conformance is an overspecification. If we define a constraint on a formal generic, we want to have the features of the constraint and not necessarily conformance to the constraint.
A weaker notion than conformance that does the job is non-conforming inheritance. If D inherits non-conformantly from B, then D has all the features of B. But wait: aren't there some subtle, nasty details?
Yes there are.
a) Using non-conforming inheritance you can convert public features into private ones.
b) With repeated inheritance you can inherit several versions of the same feature. So if the generic class calls that feature, it is ambiguous which one will be called on the actual generic.
Now you can say: "There you see. Conformance is the better choice."
But I do not give up so fast. I admit that pure non-conforming inheritance is not sufficient. But why not define a weaker form of type conformance? Let us call this form for the moment "behavioural conformance".
We say that D behaves like B if all requirements of conformance are fulfilled, except that the inheritance path between D and B can contain non-conforming parts. I.e. under behavioural conformance you can only add clients (widen the export status of features), not remove them, and you have to resolve all ambiguities of repeated inheritance with proper select clauses.
With that definition, "G behaves like COMPARABLE" is sufficient to describe the constraint. A class CG [G: CONSTRAINT] can use all features of CONSTRAINT on its formal generic G without any problems.
If in some cases you still want to be able to assign an entity of type G to one of type CONSTRAINT, we have to distinguish the weaker and the stronger specification of the constraint, e.g.
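A hypothetical notation (nothing here is settled; the colon form matches the declaration style used above, the arrow form is today's conforming constraint):

```eiffel
class CG [G: CONSTRAINT]
        -- weaker form: G behaves like CONSTRAINT

class CG [G -> CONSTRAINT]
        -- stronger form: G additionally conforms to CONSTRAINT
```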
I personally think that the default case should be the weaker form, because in the majority of cases (if not in all), the weaker form is sufficient. If the stronger form is still needed, it shall be tagged with something extra.
So why not always use the weaker form to specify a constraint and get rid of the possibility to assign a formal generic to its constraint?
Reason: I cannot imagine a generic class CG [G] without features like
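(the name `some_feature` is the one used below; the attribute `item` is an assumption):

```eiffel
some_feature (a: G)
        -- Import an object of the formal generic type into the current object.
    do
        item := a
    end

item: G
```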
How can you import an object of the formal generic type into the object without such features? Without them you can import nothing into the class except the default values of self-initializing types. But a generic class CG [G] just using default values of G is not very interesting.
So if you have features like "some_feature" and use conformance to specify the constraint, you can produce catcalls.
You don't believe me? Here is the proof.
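A sketch of the proof (class and feature names assumed):

```eiffel
class CG [G -> COMPARABLE]
feature
    some_feature (a: G)
        local
            c: COMPARABLE
        do
            c := a                       -- allowed: G conforms to COMPARABLE
            if c < "Hello world." then   -- statically valid: STRING conforms to COMPARABLE
                print ("less%N")
            end
        end
end

class CLIENT
feature
    demo
        local
            cg: CG [INTEGER]
        do
            create cg
            cg.some_feature (1)   -- at runtime {INTEGER}.`<' receives a STRING: catcall
        end
end
```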
Summary: using conformance as the basis to specify constrained generic parameters is a bad choice. A weaker notion like the above defined "behavioural conformance" is needed.
Validation with local analysis (VLA)
Validation is usually (not just in Eiffel; in nearly all typed languages) based on local analysis. I.e. the compiler can check the validity of a routine by just looking at the routine, the types it uses and the parent types (and parents of parents, etc.). This is a very useful principle for modular software: a module (a class, cluster, library) can be checked for validity, and if it is valid it remains valid in any context.
As opposed to local analysis, global analysis looks at the whole system.
As long as Eiffel has the possibility of catcalls, local analysis is not sufficient to analyze validity. Look at the feature call
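(the general form of a qualified call with one argument):

```eiffel
t.f (a)
```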
The expression t has a certain type T. This type must have a feature f which accepts an argument of type A. As long as the actual argument a conforms to the type A of the corresponding formal argument of f, the call is locally correct.
This is the static view. But at runtime the expression t can return an object of type DT which conforms to T. The feature f within DT might have redefined the formal argument to DA, a descendant of A which conforms to A. I.e. the actually called dynamic feature {DT}.f might expect a more specific type DA than indicated by the static feature {T}.f. The compiler has no possibility to find that out by doing only local analysis. I.e. the constraint of basing validity on local analysis only is a severe one.
We can do global analysis to detect potential catcalls at compile time. Looking at the whole system, it is possible to calculate all possible dynamic types of objects which can be returned by the expression t. If the actual argument a does not conform to the corresponding formal argument for all possible dynamic types, then there is a potential catcall which the compiler can reject at compile time.
This algorithm (dynamic typeset calculation) is conservative in the sense that it can detect all possible catcalls at compile time, but not every detected potential catcall results in an actual catcall at runtime. Being conservative is a disadvantage, but not a severe one, since catcalls do not happen very often in real software.
The second disadvantage is that global analysis requires much more computation for the compiler to decide whether a system is typesafe. This might be a problem for very large systems where only a small fraction of the code is added or redesigned. Even changing one line of code requires a full system analysis to check the validity of the change. With modern high speed computers and a good search strategy of the compiler doing global analysis, the response time might be kept within acceptable limits.
However, there is a third disadvantage. The compiler can detect a potential catcall, but it cannot say which module causes the problem. The actual potential catcall is detected in a well working library, the actual polymorphic attachment might be done in another well working library, and the connection of the two libraries is done in some user code. All three parts of the code are locally valid. Where do you fix the problem? Don't be misled by the very simple examples of possible catcalls given above. These are just examples to demonstrate the possibility of catcalls in very simple contexts. In real software the problem can be much more distributed around different modules which do not interwork directly.
This third disadvantage of global analysis is a severe one. For that reason nearly no modern typed programming language bases validity on global analysis: it breaks modularity and abstraction in an unacceptable manner. Especially for Eiffel, which has been designed to support the construction of large systems, validity must be based on local analysis.
Global analysis is a fine technique for a compiler to do optimization. But itshall not be used to check validity!
I.e. we are looking for a solution which guarantees type safety by doing local analysis.
The solution
Will be updated soon.
The so-called paper "The world is covariant. Is it safe?" is, as far as I recall, a PDF PowerPoint presentation we made to explain the problem of catcalls and the various possible solutions to this problem that had come up at the time of writing. It is in no way a definitive answer, as you seem to claim, or a definitive view on behalf of the authors.
I have read my blog entry twice and I found no statement where I say that you and Bertrand claim to have an answer to the catcall problem. I have just cited the paper because it says that covariance is a good modelling feature and that promiscuous generic conformance should be restricted (the use of the "variant" keyword in your paper).
But your comment made me read your PDF twice as well. From page 12 on you talk about "The retained solution: a simplified version". And in an email communication Bertrand told me that this "solution" is the one you want to implement.
Why don't you comment on the subject matter? This "I have said what you might have said what I might have meant ..." leads to nothing.