Understanding the Meaning of Understanding
Can we train a machine to detect whether another machine has understood a concept?
In principle, this is possible by testing the subject on that concept. However,
we want the procedure to avoid direct questions. In other words, we would like
to isolate the absolute meaning of an abstract idea by placing it into an
equivalence class, hence without adopting explicit definitions or showing how
the idea "works" in practice. We discuss the metaphysical implications hidden
in the above question, with the aim of providing a plausible reference
framework.