
Thursday, August 8, 2013

Reading Gary Drescher's "Made Up Minds"

The "schema mechanism" system developed in Gary Drescher's PhD project reminds me of Anticipation-Based Learning Classifier Systems, but it is more AGI / cognitive-architecture worthy because of its representation building facilities. The book is very novel for its year 1991, before modern RL theory became popular in AI. ABLCSes are quite recent development in LCSes (although with an early publication in 1990). I have (good) sentiment towards Learning Classifier Systems, they were my first encounter with a more cognitive form of AI, very long ago.

It would be cool to redo Gary Drescher's project, but using Rich Sutton and Hamid Maei's recent results -- Gradient Temporal-Difference Algorithms with off-policy learning -- instead of schemas.
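
For concreteness, here is a minimal sketch of one such update, linear TDC (temporal-difference learning with gradient correction), in Python. The function name and step sizes are my own placeholders; rho is the importance-sampling ratio that makes the update off-policy:

```python
import numpy as np

def tdc_update(theta, w, phi, phi_next, reward,
               gamma=0.99, alpha=0.01, beta=0.05, rho=1.0):
    """One linear TDC step (after Sutton, Maei et al.): theta holds the
    value-function weights, w the auxiliary correction weights;
    phi / phi_next are feature vectors of the current and next state;
    rho is the importance-sampling ratio (1.0 when on-policy)."""
    delta = reward + gamma * theta @ phi_next - theta @ phi  # TD error
    theta = theta + alpha * rho * (delta * phi - gamma * (w @ phi) * phi_next)
    w = w + beta * rho * (delta - w @ phi) * phi
    return theta, w
```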

The principle behind Gary's project is constructivism, the opposite of nativism. Almost all structure of the world, even what Kant claimed must necessarily be given a priori -- the binding of experience into objects having features -- can be learned from the input by a relatively simple mechanism. You might think that constructivism is therefore radically opposed not only to Chomsky, but also to David Deutsch's ideas I've been quoting recently -- his bashing of empiricism and of logical positivism. But consider this: the construction algorithm, to succeed, must have universal reach in David Deutsch's terms. And Gary Drescher accepts the criticisms of logical empiricism: "Even the most rudimentary conceptions of the physical object cannot be defined by the schema mechanism as any function of the sensory primitives." Section 8.6 stresses "Why Non-naive Induction Must Be Built In"; and the mechanism needs to solve similar problems with counterfactuals. The system uses counterfactuals for learning and concept invention. "The difficulty and obscurity of the concept of counterfactuals is, I suspect, a reason that its fundamental importance for learning systems has been late to be recognized, rather than a reason to consider it an implausible basis for learning."

Notes below focus on chapters 3 and 4, as these describe the mechanisms of the cognitive system; I pick a couple of nuggets from later chapters. Chapter 2 presents Piagetian theory, elaborating on the initial stages of development. It is worth reading in full.
  1. Conditions are conjunctions of items. Primitive items are sensory facts and synthetic items are beliefs.
  2. Primitive items are binary: On / Off; synthetic items are ternary: On / Off / Unknown.
  3. Objects and relations between objects are supposed to be stable configurations of schemas, synthetic items and composite actions.
  4. Schemas are identified by a triple: preconditions, action, postconditions (see the schema sketch after this list).
  5. Primitive actions change state of the world and agent.
  6. Composite actions are identified by a postcondition they achieve.
  7. Therefore a schema might be a refinement of the postcondition of a composite action under given preconditions.
  8. Accessible value in a state is the maximum value achievable along a reliable path from the state.
  9. Instrumental value is assigned to items along reliable paths to a goal, i.e. items in a state take that state's accessible value as their instrumental value. Instrumental value is transient.
  10. Delegated value is assigned proportionally to (1) the difference of the item's average accessible values: when the item is On minus when the item is Off; and (2) the duration of the item being On. It is more permanent (see the value sketch after this list).
  11. An item that frequently has instrumental value gains delegated value only when it is not "readily accessible" -- readily accessible items have a difference in accessible values (between their On/Off states) close to zero.
  12. To avoid runaway propagation of delegated value through cycles, the value propagated is half of the delegated value.
  13. Attention via hysteresis-habituation loop: recently selected schemas have more weight but decreasing with consecutive selections.
  14. Reweighting to upsample schemas with rare actions.
  15. Promote actions with inverse effects (turning an item On/Off right after it was turned Off/On), as this heuristically leads to reliable schemas.
  16. Marginal attribution (see the statistics sketch after this list):
    1. Start from bare schema: {}-A->{}; add results in spinoff schemas for items whose positive-transition (from Off to On) or negative-transition rates following action A are higher than their averages for all actions.
    2. Add (negated) items in contexts of spinoff schemas which (anti)correlate with validity of the schema, making the spinoff schema more reliable.
    3. All statistics count only unexplained transitions and correlations -- when the items aren't in results or contexts of existing valid schemas. (Statistics are reset after a spinoff accordingly.) Statistics are only collected by most specific schemas accounting for a situation. This increases sensitivity to regularities and reduces combinatorial explosion.
  17. Schema chaining requires that all items of the (primary) context of a following schema are provided by results of preceding schema. It's used for composite actions, etc.
  18. Schemas have associated extended context (and extended results). Extended contexts and results are mutable (evolve over time). Besides being a data structure for spinoff formation, extended context adds to the condition for activation of a schema. Schema chaining cannot rely on the corresponding spinoff schemas, because it often requires general primary contexts.
  19. For a composite action, besides conditions that need to hold initially, there can also be conditions that need to hold throughout the action.
  20. Synthetic items are designed to identify invariants when all apparent manifestations change or cease (compare Piagetian conservation phenomena).
  21. Keep track of local consistency: the probability that a schema will be valid right after it has been valid, and the expected duration of consistency: how long from onset of validity to the first invalid activation. (Recall attention via hysteresis.)
  22. A synthetic item is a reifier of the validity conditions of an unreliable but locally consistent schema, called its host schema -- its action is called the probing action and its result the manifestation. I.e. it is the state item such that, when added to the context (precondition) of the schema: if the schema were activated, the action would bring the result. (See the synthetic-item sketch after this list.)
    1. The intention is that the synthetic item captures a persistent feature, like presence of an object, while the remaining context items of the host schema capture transient features, like effector configuration.
  23. Learned verification conditions set the state of a synthetic item:
    1. Host schema trial: when the host schema is activated, On resp. Off if it succeeded resp. failed.
    2. Local consistency: the state remains as changed for at most a period of expected duration of host schema's local consistency (for On, local "inconsistency" for Off). Then revert to Unknown.
    3. Augmented context conditions: the extended context of the host schema (which collects evidence from spinoff schemas).
    4. Predictions: "If a synthetic item appears in the result of a reliable schema, and that schema is activated, then in the absence of any evidence to the contrary, the mechanism presumes that that schema succeeded".
  24. The above mechanism approximates a synthetic item, but the synthetic item is not coextensive with any function of cumulative inputs (i.e. of the input history).
    1. "The schema mechanism grounds its synthetic items in the reification of counter-factual assertions; the subsequent adaptation of its verification conditions is driven by that grounding."
  25. Composite action is created for each spinoff schema that has a novel result.
  26. "A composite action is considered to have been implicitly taken whenever its goal state becomes satisfied [...] Marginal attribution can thereby detect results caused by the goal state, even if the goal state obtains due to external events." Which together with hysteresis leads to imitation.
  27. Backward Broadcast mechanism and action controller learn proximity of schemas (results) to goal states (of composite actions). If reliable chains of schemas are found, they are incorporated into composite actions. The chains are also used for forward prediction.
  28. The action controller handles special cases: nondeterministic actions (schemas with the same contexts and actions but various results), repetition, and on-the-fly repair (detecting schemas that make applicable some component of an interrupted action).
  29. Schema with composite action cannot spin off a schema with part of the composite action goal in the results.
  30. There is no "problem resolution" mechanism. Rather, some schemas hit a dead-end, and are taken over by schemas that capture more fruitful regularities.
  31. I haven't understood how inversely indexed representations (synthetic items) work. (par. 6.4.4)
  32. Note that synthetic items do not represent the identity of, for example, tactile-visual objects. This isn't bad, though, because the system's mistakes reproduce Piagetian errors at the corresponding developmental stages. Errors mean unreliable schemas, which leads to further development.
  33. Now on to more far-fetched stuff. "The new conception [learned abstract concept] reifies the set of circumstances under which a piece of one's computational machinery behaves a certain way."
  34. "Consciousness requires knowledge (and hence representation) of one's own mental experiences as such; the schema mechanism does not come close to demonstrating such knowledge."
  35. Unimplemented mechanism: subactivation. "To subactivate an applicable schema is essentially to simulate taking its action, by forcing its result items into a simulated-On state (or, if negated, a simulated-Off state)." The simulated states are entirely distinct from actual states and all mechanisms are duplicated for them. But statistics are shared, and spinoff schemas are created "for real".
    1. Simulations are serial but parallel chaining search will cache the knowledge.
  36. Unimplemented mechanism: explicitly represent inverse actions to make them available for subactivation (i.e. simulation).
  37. Override generalizations: when a derived (by simulation) schema's prediction is wrong because a direct schema from which it was derived is overridden, the derived schema should be overridden too, without penalty for the derivation. A new schema will be created to capture this exception.
    1. "The suggestion is that deductive-override machinery may permit the schema mechanism to escape the fallacy of naive induction. The key is to regard the conflict between a reasonable generalization and an absurd but always-confirmed generalization as just another conflict between generalizations expressed at different levels of description."
  38. "The reason [not to build in] a variable-matching implementation of generalizations, is just that there is no apparent way to support such an implementation without abandoning the constructivist working hypothesis by including domain-specific build-in structure. [...] Perhaps the system itself could be designed to devise explicit structured representations to support variablized generalizations. [I]f virtual generalization fails, devising such machinery may be vital to the schema mechanism."
  39. "A schema's extended context is essentially a connectionist network solving a classifier problem."
  40. Unimplemented: clustering (i.e. hierarchical modeling); "having coordinated coarse- and fine-grained spaces mitigates the combinatorics of showing the path from one fine-grained place to another, because the path can be represented as a coarse segment to get in the right vicinity, followed by a fine-tuning segment."
  41. Unimplemented: garbage collection. Candidates: schemas not contributing to goal achievement, not spawning new spinoffs, seldom activated -- perhaps even schemas that are activated, but whose recreation opportunities are more frequent than their activation opportunities.
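
To make the notes above concrete, here are a few Python sketches. First, the schema sketch: a minimal rendering of the schema triple of notes 4 and 18. All names are my own placeholders, and the real mechanism's bookkeeping (extended context and result, statistics) is elided:

```python
from dataclasses import dataclass

ON, OFF, UNKNOWN = True, False, None  # primitive items binary, synthetic ternary

@dataclass(frozen=True)
class Schema:
    """A schema triple: context (preconditions), action, result
    (postconditions). Context and result are conjunctions of
    (item, state) pairs."""
    context: frozenset  # e.g. frozenset({("hand-at-left", ON)})
    action: str
    result: frozenset

def applicable(schema, state):
    """A schema is applicable when its whole context holds in `state`,
    a dict mapping item names to On/Off/Unknown."""
    return all(state.get(item) == value for item, value in schema.context)

def reliability(activations, successes):
    """Fraction of activations on which the predicted result obtained."""
    return successes / activations if activations else 0.0
```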
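
Next, the value sketch for notes 10-12. The book gives the ingredients of delegated value, not a closed formula, so the exact weighting below is my guess:

```python
def delegated_value(avg_accessible_on, avg_accessible_off, mean_on_duration,
                    eps=1e-3):
    """Delegated value of an item (notes 10-11): proportional to the
    difference of its average accessible value when On vs. when Off,
    and to how long the item tends to stay On. Readily accessible
    items (difference near zero) get no delegated value."""
    diff = avg_accessible_on - avg_accessible_off
    if abs(diff) < eps:
        return 0.0
    return diff * mean_on_duration

def propagated_value(delegated):
    """Note 12: only half of the delegated value is propagated onward,
    so cycles in the schema graph cannot cause runaway growth."""
    return 0.5 * delegated
```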
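
The statistics sketch for marginal attribution (note 16.1): counters of item transitions following each action versus over all actions, from which result spinoffs are proposed. The real bookkeeping also counts only transitions unexplained by existing schemas (note 16.3), which I omit here:

```python
from collections import defaultdict

class TransitionStats:
    """Transition counters for marginal attribution; `kind` is 'pos'
    (an Off -> On transition) or 'neg' (On -> Off)."""
    def __init__(self):
        self.after_action = defaultdict(lambda: [0, 0])  # (action, item, kind) -> [hits, trials]
        self.overall = defaultdict(lambda: [0, 0])       # (item, kind) -> [hits, trials]

    def record(self, action, item, kind, occurred):
        """Call once per item per time step, right after `action`."""
        for counter in (self.after_action[(action, item, kind)],
                        self.overall[(item, kind)]):
            counter[0] += int(occurred)
            counter[1] += 1

    def spinoff_results(self, action, factor=2.0):
        """Items whose transition rate right after `action` is well
        above their all-actions average; each becomes the result of a
        spinoff schema {}-action->{item}."""
        results = []
        for (a, item, kind), (hits, trials) in self.after_action.items():
            if a != action or trials == 0:
                continue
            o_hits, o_trials = self.overall[(item, kind)]
            base = o_hits / o_trials if o_trials else 0.0
            if base > 0 and hits / trials > factor * base:
                results.append((item, kind))
        return results
```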
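
And the synthetic-item sketch for notes 22-23. Only two of the four verification conditions are modeled (host-schema trials and the local-consistency timeout); the augmented-context and prediction-based conditions, and the class layout itself, are my simplifications:

```python
class SyntheticItem:
    """Reifies the validity conditions of an unreliable but locally
    consistent host schema (note 22). State is ternary:
    True (On), False (Off), None (Unknown)."""
    def __init__(self, host_schema, expected_consistency_duration):
        self.host = host_schema
        self.expected = expected_consistency_duration
        self.state = None
        self.set_at = None

    def on_host_trial(self, succeeded, now):
        # Verification condition 23.1: the host schema was activated;
        # On if it succeeded, Off if it failed.
        self.state = bool(succeeded)
        self.set_at = now

    def tick(self, now):
        # Verification condition 23.2: the state persists only for the
        # expected duration of the host schema's local (in)consistency,
        # then reverts to Unknown.
        if self.state is not None and now - self.set_at > self.expected:
            self.state, self.set_at = None, None
```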

Friday, September 7, 2012

Reading Thomas Metzinger -- intentionality

Based on Thomas Metzinger's "Being No One", starting from section
6.4.3, "What is intentionality anyway?"

"[T]he content of a perceptual state really is not a part of the
environment, but a relation holding to this part [...] Full-blown,
phenomenal self-consciousness always involves a relation between the
self and an object component."

"Some stages [of attentional agency] are conscious, some are
unconscious.  As a whole, this process displays an extremely high
degree of flexibility and short-term adaptability, involving the
explicit internal simulation of alternative objects for attentional
processing. We like to call this “selectivity,” [...] What there
is, in the sort of phenomenal agency involved in focal attention, is a
globally available representation of the process in which different
alternatives are matched against each other and the system settles on
a single solution."

"If a system integrates its own operations with opaque mental
representations, that is, with mental simulations of propositional
structures that could be true or false, into its already existing
transparent self-model while simultaneously attributing the causal
role of generating these representational states to itself," wait, how
does one do that?

"Opaque mental representations" are simply those that are not perceived as standing for reality. There is conscious meta-representation, but it doesn't have dedicated channels. It is just the knowledge that some experiences are not veridical -- for example the doubling of the world when one presses the sides of the eyeballs. And it is also the knowledge of how to use mental faculties to manipulate higher, more abstract layers of "modality stacks". Opaque are the representations that arise by top-down modulation instead of being driven by the inputs (the senses, the motor feedback). "[In volitional thought] the object component is opaque. We know that we take a certain attitude toward a self-generated representation of a goal."

"Please note how a phenomenal first-person perspective now reveals itself as the ongoing conscious representation of dynamic subject-object relations: to see an object, to feel a pain, to selectively “make a thought your own,” to chose a potential action goal, or, to be certain of oneself, as currently existing." Metzinger leads me to the conclusion that we have a self-perception modality, as I mentioned at the end of foregoing comment. The subject-object relation character of experience comes from cross-modal binding with this modality. But... "Cognitive self-reference, therefore, on the phenomenal level is necessarily experienced as direct and immediate, because it is not mediated through any sensory channel (it takes place in a supramodal format) and because of the fact that it is a second-order process of phenomenal representation, is not introspectively available (naive realism)." So is that a wrong conclusion, is self-perception supramodal? Although he speaks here about the cognitive layer, the "channel" refers to the earlier (phenomenal) layer of what I might call "modality stack". The supramodal aspect is just the more abstract entities arising from cross-modality binding in higher layers. Modalities correspond to senses, so perhaps we need a different term because we also need to cover "effector stacks", the layers of the "motor cortex".

"In short, phenomenal models of the intentionality relation consist of a transparent subject component and varying object components, which can be transparent as well as opaque, transiently being integrated into an overarching, comprehensive representation of the system as standing in a specific relation to a certain part of the world. [...] Episodic memory is a process of reconstructing what was here termed a PMIR, because one necessary constituent of memory retrieval is not simply the simulation of a past event, but an association of this simulation with a self-representation. [...] Reactivating a PMIR inevitably means reactivating a PSM." Patients without the PMIR are zombie-like. "Akinetic mutism is a state of wakefulness, combined with the absence of speech, emotional expression, and movement. Obviously, for such patients there is an integrated functional self-model, because they are able to briefly track objects, pull a bedcover, or, if forced, say their own name. [...] What the outside observer experiences as a vacuous stare or emotional neutrality is the complete absence of any willed action or communicative intention, the absence of a globally available model of subject-object relations (and, as can be seen in lacking the desire to talk, of subject-subject relations as well)." 

"The experience of agency seems to be the ongoing representational dynamics collapsing a phenomenal model of the practical intentionality relationship into a new transparent self-model. [...] It is important to note that at least two different kinds of ownership will be involved in any accurate description of the phenomenology: ownership for the resulting body movements [PSM], and ownership for the corresponding volitional act [PMIR], for example, the conscious representation of the selection process preceding the actual behavior." The PMIR is an integrative capacity.

I haven't read chapter 7, but the PMIR seems to me to just describe various binding processes where one of the components is the transparent part of the PSM. I don't think these binding processes form a very distinctive separate module. If you describe the structure of the PSM, with its transparent lower layer and more abstract higher layers etc., and describe the binding processes in general, there doesn't seem to be much to add specific to the PMIR. ETA: binding is not the right term, since I meant processes that integrate across objects; binding refers to integrating experiences into objects. This large class of integrative processes, complementary to binding processes, deserves a name. Might it be PMIR? But these processes are all over the place; by themselves they don't model anything. And the integrative processes were already covered in the first part of the book. Perhaps Metzinger's thesis is that there is something special about this subclass of them.

Friday, August 17, 2012

Reading Thomas Metzinger -- the self


What is the role of the self? Based on chapter 5, (mostly)
subchapters 6.1--6.3, and section 6.4.2 of "Being No One". Note that I
only cover stuff where I want to add (or question) something; many
interesting things get omitted.

Phenomenal Self Model


It seems that the Phenomenal Model of the Intentionality Relation is a
structure and process for managing attention. When an object is
integrated into the PMIR, the directing of attention towards that
object is available phenomenally (i.e. available for action control,
concept formation and "higher-order" attention). The PMIR will be the
basis of the next post.

Integrating the Phenomenal Self Model into the world model provides
for relations between the PSM concept and the concepts of "outside"
objects. Such relations are the basis of goal-directed behavior (when
a simulated relation differs from an actual relation).

The PSM also integrates the representation of any process that
preattentively integrates a set of features as an object. Only then
does the object become attentionally available, so this is even more
basic than the PMIR; it enables all "higher-order" relations. I'm not
sure whether Metzinger thinks the object is available for concept
formation prior to this integration into the PSM; it is unlikely,
given the lack of attention. But for sure, what is integrated here is
the distinct concept of perceiving the object. Now it becomes possible
to have the goal of "taking a better look" at an object. This is a
considerable limitation compared to how I imagined attentional
availability, but since global availability means integration into
the world model, it shouldn't be much harder to also integrate it
subjectively, i.e. into the PSM. The very formation of objects already
happens in the world-model and the PSM, and in this process the
object-encoding processes become mental representations, i.e.
potentially globally available.

The PSM is distinguished in the representational space by high
invariability. Attention (for the PSM and generally) might be driven
to "places" where the output of the model and the input of the senses
differ the most. But background self-awareness is always present.

All properties of the system are represented in one integrated data
format, i.e. the PSM is "holistic".

"The nervous system, and this will be true for the particular case
of self-representation as well, is not so much a top-down controller,
but more a system whose task consists in generating adequate patterns
within [the body - nervous system - environment] overall
dynamics. [...] [I]f an internal self-model is to be a successful
instrument in predicting the future behavioral dynamics of an
individual system, it must in some way mirror or reflect important
functional characteristics of precisely that part of its internal
dynamics, which in the end will cause the overt actions in question."

"A fully grounded self-model would simply disappear. In principle,
phenomenal selfhood emerges as long as there is a conflict or
incoherence between bottom-up and top-down processes, between
expectancy and actual perception." The discrepancy draws attention to
the respective emulators in general, making them conscious,
i.e. globally available for flexible reaction. "A certain level of
autonomous, residual self-modeling is preserved."  Note that dynamics
(including goal-directedness) is part of the self-model.

Revisiting Transparency


Metzinger makes a general comment that I might have missed in my
discussion of transparency (I don't remember). Content is phenomenally
opaque when it is presented as representational, i.e. as correlated
with a different presented content (for example, an imagined
rehearsing self is correlated with the actual self integrated into the
"Now"). But later in the section he goes on with the old confusion :-(
First, by having a "fully opaque" self experience be an experience
of a ghost or spirit (a disembodied entity). Later, he seems to say
that opaqueness is the "presentation" of misrepresentation in
presentational content. But then he writes: "There is always
self-presentational content, there are always emotions and gut
feelings, for instance, and presentational content is always fully
transparent." Yet this is specifically about the self, so that's
OK. It looks like "presented as misrepresentational" is achieved by
having the relation, but missing the target process with which the
opaque content is supposed to correlate.

This transparency confusion makes it difficult to decipher notions
such as "nemocentric reality model (centered on a globally available,
but fully opaque self-model embedded in the current virtual window of
presence)", because here it is certainly not about presenting
misrepresentation (in either the simulation or the
pseudo-hallucination form). It is also not (or is only to a degree)
about representing an independently presented content (neither in the
simulation sense nor in the "constitutionally earlier stages of
processing" sense). Although I have an intuition of what "fully
opaque" is supposed to mean here, where is the meat? "For any
phenomenal representation, its degree of phenomenal opacity is given
by the degree of attentional availability of earlier processing
stages." Yeah, you've already said that... OK, I understand "fully
opaque" to mean that the dependency of every processing stage on an
earlier stage (or generally -- bidirectionally, on other stages) is
represented.

"My hypothesis is that the phenomenon of transparent self-modeling
developed as an evolutionary viable strategy because it constituted a
reliable way of making system-related information available without
entangling the system in endless internal loops of higher-order
self-modeling." No, the solution was already flourishing before the
problem developed :-)

Transparency finale: "A transparent representation is characterized by
the fact that the only properties accessible to introspective
attention are their content properties. It does not allow for the
representation of a vehicle-content distinction using on-board
resources. [...] If I engage in typical cognitive activities like
reasoning, and if I then direct my introspective attention to this
process as it unfolds, I experience myself as operating with internal
representations that I am deliberately constructing myself. They do
not imply the existence of their simulanda, and they might be
coreferential with other mental representations of myself without me
knowing this very fact. [...] [T]he phenomenology of transparent
experience is the phenomenology of not only knowing but of also
knowing that you know while you know; opaque experience is the
experience of knowing while also (nonconceptually, attentionally)
knowing that you may be wrong."

Role of Objects


"Global availability of information always means availability for
transient, dynamical integration with the currently active
self-model." I don't know what's the added value of PMIR over just
integration into "the window of presence and global model of the
world", we'll see...

Does "Phenomenal self-presentation is anchored in mental
self-presentation" mean that what is experienced is always part of
what could be experienced? I.e. that there's always more to direct
attention to? It probably speaks about the world-model and self-model
structure.

"It is, of course, an interesting question, whether the abstract,
normally unconscious processing stages preceding volitional and
phenomenally self-modeled movement selection can already count as
egocentric representations, or whether this is precisely the step at
which those computations are integrated into a self-representation,
which also makes them conscious. In any case it now seems plausible to
assume that what gets integrated into the PSM of the organism as a now
deliberating subject is a determinate, single, and concrete
representation of a specific behavior."

The PMIR might be the key to intersubjectivity: its representation
can be its object.

Goal representations (via goal-encoded objects) are in nonegocentric
frames of reference. They can be integrated into the PSM for actual
behavior and for self-simulation; they can be integrated into
"allocentric frames", simulations of others integrated into the PSM
as simulations; or they can be just unconsciously activated by mirror
neurons. And mirror neurons first emulate low-level, non-goal
movements. Low-level and high-level resonance mechanisms do not
coincide. Goal-encoded objects are object representations with
selection mechanisms for a repertoire of actions, like various
grasping behaviors.

Linguistic concepts are much more than simple concepts (i.e. processes
that represent other processes not homomorphically but by activation
links, and so gain recombinability): they are goal-encoded objects,
having qualia as all objects do -- here, the words or other symbols.

Thursday, August 2, 2012

Reading Thomas Metzinger -- vehicle vs. content


On page 294 (5.4, "From Mental to Phenomenal Self-Presentation:
Embodiment and Immediacy"), the author writes "There will be a level
of elementary bioregulation, arguably a level of molecular-level,
biochemical self-organization, at which it simply is forced—from a
conceptual third-person perspective—to maintain the distinction
between content and vehicle." and later writes "As soon as more
empirical data are available, it will be a task for philosophy to
demarcate a more fine-grained level of description on which it is
plausible to assume a full match between content and causal role, that
is, the identity of vehicle and content." This is very unclear to me.

(BTW: Later, the author writes about the brain being insensitive to itself,
but does not discuss headache.)

There is an intentional vehicle-content distinction, and a phenomenal
one. The intentional, i.e. referential, content is obvious: it is the
referents (the representanda). The phenomenal content is "the way
certain representational states feel from the first-person
perspective." In 8.2 "Preliminary Answers", answering "What is the
“phenomenal content” of mental states, as opposed to their
representational or “intentional content?”", he writes: "It is a
special form of intentional content, namely, in satisfying the
constraints developed in chapters 3 and 6." This cannot be right,
since the constraints can only be satisfied by the intentional
vehicle, not the content. Is it supposed to mean that "phenomenal
content" is the semantic aspect of (phenomenal) experience? That would
be an interesting thesis: how conscious processes "feel" is what they
mean. Continuing the question, "Are there examples of mentality
exhibiting one without the other? Do double dissociations exist?",
Metzinger says:

"Double dissociations do not exist. There certainly is unconscious
intentional content. A lot of it. But in ecologically valid standard
situations there is no conscious state that is not a representational
state in some way (for a nonstandard situation, cf. the abstract
geometrical hallucinations [...] [which are] purely phenomenal
content). [...] there is no example of phenomenal content that is not
also directed at some target object, property, or relation. Please
note that this does not mean that the experiential subject has to have
the slightest clue about what the intentional object of his or her
experiences actually is. In many cases, for example, in living through
diffuse feelings and emotions (like jealousy), the original
intentional object may be millions of years away. It may not exist
anymore. The original representandum may be something that was only
present in the world of our distant ancestors."

Monday, July 30, 2012

Reading Thomas Metzinger -- transparency

Transparency.


Reading "Being No One", the biggest problem I've found so far is the
"Transparency" constraint (3.2.7, of 3.2 "Multilevel Constraints: What
Makes a Neural Representation a Phenomenal Representation?", of 3 "The
Representational Deep Structure of Phenomenal Experience"), and the
related "Homogeneity" constraint (3.2.10 "“Ultrasmoothness”: The
Homogeneity of Simple Content"). Perhaps I'll grasp it making this
note.

Warning: the notion of attention I use below is the technical one
defined in the book.

Although the author appears to define "transparency" precisely, I'm
not sure the bundle of accompanying examples fits into a single
concept. It is spanned between two aspects.

The first is phenomenal simplicity: experience is transparent when we
cannot direct attention to any more of the details of the process over
which the experience supervenes. I.e. there is a strict, impenetrable
border of attention: the aspects of the perception process which (at
least potentially) are part of the experience are the "portions" of
this process to which we can direct attention. And there are large
portions of the perception process to which we just cannot direct
attention at all, at least in normal conditions (e.g. not under
psychedelic drugs; some drug-induced artifacts might be transgressions
of this attentional border). This view of transparency makes the
experience "substantial": things (and I'll add thoughts as well)
appear to us as they appear to us to fundamentally be; we have no
experiential clue that the appearance could be, for example, an
abstraction or a statistical inference.

The second aspect is phenomenal givenness (veridicality?). In normal
wakeful awareness (and in some nocturnal dreams, those that are
"realistic", i.e. vivid and non-lucid), we are predisposed to be naive
realists. We experience being immersed in a world as it is
independently of our act of perception. Experience is transparent when
it is experienced as exclusively about the actual world (or the actual
us). Experience is transparent when its content is experienced as its
only cause. Under this aspect, thoughts, imaginations and lucid dreams
are phenomenally opaque (and therefore also plans, mental rehearsal,
etc.). They (usually) are perceived as "representing", as standing
for something (since only actual stuff can be present).

Of course these two aspects are related; for example, we can talk
about ineffability and immediacy in both cases. But there might be
experiences that are one but not the other. It's likely that in vivid
dreams (and lucid dreams are vivid, since a lot of the brain is woken
up) we experience qualia. Is Thomas Metzinger taking the two aspects
as two sides of a single coin, or just defining "transparency" as the
conjunction of "givenness" and "simplicity"?

The author's definition of phenomenal transparency:

"For any phenomenal state, the degree of phenomenal transparency is
inversely proportional to the introspective degree of attentional
availability of earlier processing stages."

Earlier processing stages are "temporally earlier" in the aspect of
phenomenal givenness, and are "constitutionally earlier" in the aspect
of phenomenal simplicity. In both cases the "earlier processing
stages" are the "internal causes" of the experience.

I think, from what Thomas Metzinger writes, that he might hold the
following: if we could direct attention to, for example, edge
detectors in visual processing, then as long as we rest our attention
at the edge detectors, we would have to "will into existence" the
perception of the objects that constitute the normal visual
experience, for them to appear to us. By analogy to how we experience
imagination, where we have to "will" imagined objects "into
existence". The objects wouldn't just effortlessly appear beside the
edges. In 3.2.10, in the paragraph "Homogeneity as an
Informational-Computational Strategy" (he writes such paragraphs for
each constraint), he states: "Without homogeneity we could
introspectively penetrate into the processing stages underlying the
activation of sensory content. One obvious consequence of this would
be that the multimodal, high-dimensional surface of our phenomenal
world would start to dissolve. We would then phenomenally experience
the model as an ongoing global simulation permanently generated from
scratch, as it were, and thereby it would inevitably lose the
phenomenal character of being an untranscendable reality." What is
normally perceived is always a persistent, "maximum a posteriori"
object. Fixing too-early processing stages, like edge detectors, might
disrupt this inferential process.


What are your thoughts? Do you think that phenomenal givenness implies
phenomenal simplicity? What about vice-versa? Do you think that the
objects of imagination are always given to us relationally, we always
grasp the processuality of their coming about? Are there imaginary
qualia? Do you think that attentional access to (constitutionally)
earlier processing stages dissolves the experiential immediacy of
later stages?

Lucid dreams are interesting because we can affect the scene
construction by directing our attention to top-level fragments of the
top-down information propagation in scene construction, while the
bottom-up information propagation proceeds unaffected. I'd say that we
have "givenness" when we do not clamp any fragment of the top-level
layers for the top-down information propagation of a given scene
(either because we cannot, since the bottom-level layers are clamped
by sensory input, or because we don't realize that we can). (I'm sorry
for the homunculus-like way of speaking.) I'd say we have "simplicity"
when the bottom-up information propagation starts below the lowest
attentionally available layer, rather than in the middle, for the
scene. I'm finishing 3.2.10; lucid dreams are covered somewhere in the
next chapter.

Chapter 4 "Case Studies I".


Let me pick up more issues in "Being No One"; perhaps the glitches are
my own stubbornness.

The first problem is when the author claims (4.2 "Deviant phenomenal
models of reality", 4.2.1 "Agnosia", pp. 220-221) that a patient who
uses chromatic information for shape formation and motion detection,
but has no experience of color (his visual experience is in shades of
gray), has color cognitively but not attentionally available. The fact
that chromatic distinctions feed into shape formation has obviously
nothing to do with recognizing color conceptually, no? Chromatic
vision feeds into the formation of concepts here, but not of color
concepts.

4.2.2 "Neglect". Similarily, hemineglect no doubt is an attentional
deficit (and a deficit of the "model of intentionality relation"), but
likely the deficit is simply because of the lesion of processing
stages leaving nothing to attend to and model. There's less of a
problem because the author doesn't say otherwise. He sort-of analyzes
the minimal conditions that could generate hemineglect.

The discussion of Anton's syndrome focuses on the self-modeling
deficit without mentioning whether the offline phenomenal experience
(nocturnal-dream-like, but top-down modulated) is also absent. I guess
it is.

4.2.4 "Hallucinations", Charles-Bonnet syndrome; "percepts are missing
characteristic features and are simply superimposed on, but not
semantically embedded in, the phenomenal model of external reality" is
contradicted by the following example. The patient reports pragmatic
and (slight) phenomenal abnormalities as distinguishing hallucinated
content, semantically it seems to be OK.

Phenomenal transparency/opacity is again used distinctly in two
senses: one is transparency as "phenomenally normal experience", and
the other, inflated meaning is opacity as "believed not to correspond
to reality".

I don't think it's likely that "earlier processing stages" can become
directly available for attention, because of architectural
limitations. It's more likely that they become available indirectly,
by "polluting" the bottom-up signal with "vehicle properties" (the
"maximum a posteriori distribution" puts too much weight on the
consistency of the lower layers, so they fixate before propagating
information upwards). It is still a form of attentional
availability... Perhaps this is not indirect at all, but an important
(if not the primary) mechanism of attentional availability? For
Metzinger, attentional availability is "subsymbolic
re-representation", i.e. additional neurodynamical structure is formed
that correlates with the original phenomenon and propagates
information about it.

You might be asking how this differs from cognitive availability,
i.e. "symbolic re-representation". The attentional structure is
(neurodynamically) homomorphic with the original phenomenon and so
cannot be reassembled in arbitrary contexts, while the symbolic
structure is only activationally linked with the original phenomenon.

4.2.5 "Dreams", "phenomenal dream content is not attentionally
available" -- obviously, "All there is is salience-driven, low-level
attention." Again we have two concepts conflated, "attentional
salience" and "volitional attention". Phenomena have to be
attentionally available (in the attentional salience aspect) to be
even minimally conscious. As later noted, this is related to the
distinction from chapter 2 (p. 36) between four forms of
introspection:

1. external attention
2. consciously experienced cognitive reference
3. inward attention / inner perception
4. consciously experienced cognitive self-reference

Introspection 1 is attention (subsymbolic re-representation) toward a
"world" experience. Introspection 3 is attention toward the
self-model; it "is generated by processes of phenomenal
representation, which direct attention toward certain aspects of an
internal system state, the intentional content of which is being
constituted by a part of the world depicted as internal".
Introspection 2 and 4 are the symbolic variants. Metzinger says
"introspection 3 is almost impossible in a dream state, because
high-level attention is absent." Obviously, most dreams feature a
phenomenal self; only it is not a volitional self, because of the lack
of deliberation. Metzinger seems to say that in dreams attention
cannot rest on the self-model; it only integrates features of the
self-model selected by the generated world-model. But it's an
attentional deficit (lack of high-level attention). "You cannot
introspectively attend even to your most simple sensory perceptions in
the dream state, because you are not an attentional subject."


(Anecdotally, I think I've had dreams without a phenomenal self --
dreamt from the third-person perspective or remembered as movies; I've
certainly had dreams with a phenomenal self that was not my actual
(waking) self-model, remembered as movies with a distinguished
character. Normally I have dreams remembered in roughly the same
format as normal memories.)

P.S.

Later the author makes it even more evident that he thinks of presentational content, i.e. the processes that are only attentionally but not cognitively available, as transparent, and of representational content, i.e. cognitively available processes, as opaque. As a tendency, perhaps.