"One can only lean on that which resists." (On ne s'appuie que sur ce qui résiste.)
André Malraux
Word cloud of the terms present in this page; circles link words with similar meanings.
The network of models on the right represents a popular model of each layer,
linked by symbolic one-way functions (OWF).
Problem
If distinct layers have intrinsic properties rather than being constructs resulting from the researchers' level of analysis, how did they appear and remain stable? Why does computation across layers remain difficult? Why is there a costly increase in complexity even though evolution is not a directional process?
Minimally required assumptions
Explicit and hierarchical conceptual toolbox.
Model
within an arms-race environment
General stable structure = OWF(...OWF([OWF([independent elements])+other maintaining functions,OWF([independent elements]),...]+other maintaining functions))
- f a process represented as a function
- f(x)=y an externally visible action (either directly or through its waste products) resulting from internal behavior x of f
- g(y)=x the inverse function of f such that g(f(x))=x
- h(g,f) the hardness, i.e. the relative cost of O(g(y)) compared to O(f(x)), with h(g,f)=0 for identical complexity and h(g,f)=+∞ for maximal theoretical hardness
- consider whether h should be a metric or not (e.g. negative values, symmetry, ...)
- can be visualized as the weight of a directed edge between two vertices of a graph
- since a higher layer is a grouping of vertices from the lower layer, a hypergraph could be used instead
- the harder f is to invert, in general or for a restricted set of values, the costlier the controllability
- C() controllability of a system (a graph) by another (also a graph)
- problem: so far a unary function
- does it have to be considered from another layer?
- relC(A,A) as relative controllability is not necessarily equivalent to C(A); rather it is its theoretical limit
- most likely never happens for a complex system as it would have prohibitive costs; delegation with imperfect control is most likely to happen, cf self-models (BeingNoOne)
- cf stabilizability, especially as the system must react but not overreact to small internal fluctuations, cf also homeostasis
- is relC(A,B) equivalent to relC(B,A), or is relC(A,B)<=relC(B,A) or relC(A,B)>=relC(B,A)?
      h(A,B)
     -------->
   A          B
     <--------
      h(B,A)
if h(A,B)>>h(B,A) or h(B,A)>>h(A,B) then it can be represented as a single arrow, giving
  o   o
  ^   ^
   \ /
o<--s-->o
   / \
  v   v
  o   o
Simply consider Wikipedia:Multigraph#Labeling.
creation of a "milieu intérieur" (intuitively similar to the cell membrane, the simplest configuration being an ellipsoid)
relC(A,B) ∝ h(A,B)
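A minimal sketch of this edge-weight representation, assuming Python with networkx; the vertex names, h values and the collapse threshold are illustrative only, not part of the model.

```python
import networkx as nx

# Hypothetical hardness values; h(A,B) is the weight of the directed edge A->B.
G = nx.DiGraph()
G.add_edge("A", "B", h=0.1)   # computing f: cheap
G.add_edge("B", "A", h=9.0)   # inverting f: costly

def collapse_if_asymmetric(g, u, v, ratio=10.0):
    """Keep only the dominant arrow (as in the single-arrow diagram) when one
    direction is much harder than the other, else keep both labeled edges."""
    huv, hvu = g[u][v]["h"], g[v][u]["h"]
    if huv >= ratio * hvu:
        return [(u, v, huv)]
    if hvu >= ratio * huv:
        return [(v, u, hvu)]
    return [(u, v, huv), (v, u, hvu)]

print(collapse_if_asymmetric(G, "A", "B"))  # [('B', 'A', 9.0)] with these values
```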
- evaluate (see the sketch after this list)
- Rényi entropy on the distribution of possible behaviors and visible actions to measure hardness
- metrics from graph theory/combinatorics/...
- degree distribution to measure controllability
- bridge between information theory, number theory, combinatorics and graph theory
- Lotka-Volterra or Richardson's model to define the arms-race closure
- resulting in fluctuating values of h()
- fitness based on the value of h() relative to the rest of the population, yet also taking into account the energy spent on amplification
- fitness: best return on investment in using the most stable set of components, with maximum controllability for oneself and not for others, while maintaining on oneself the lowest level of controllability
- the cost of hardness amplification, even across generations
- representation as one hypergraph
- stack up the graphs (but is it really representative?) so as to have just one model
- more generalist
- might require high dimensional matrices
- which axioms from graph theory hold?
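A minimal evaluation sketch for the two proxies listed above, assuming Python with numpy and networkx; the action distribution and the random graph are placeholders, not data extracted from any layer.

```python
import numpy as np
import networkx as nx

def renyi_entropy(p, alpha=2.0):
    """Rényi entropy of order alpha over the distribution of visible actions;
    a flatter distribution (higher entropy) is used as a proxy for a
    harder-to-invert process. alpha -> 1 recovers Shannon entropy."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0] / p.sum()
    if np.isclose(alpha, 1.0):
        return float(-np.sum(p * np.log(p)))
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

# Placeholder distribution over the externally visible actions y = f(x).
actions = [0.5, 0.25, 0.125, 0.125]
print(renyi_entropy(actions, alpha=2.0))

# Degree distribution as a crude controllability proxy (cf. Barabási's paper above).
G = nx.gnp_random_graph(50, 0.1, directed=True, seed=0)
out_degrees = np.array([d for _, d in G.out_degree()])
print(np.bincount(out_degrees))  # out-degree histogram
```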
Methodology
CS-inspired writing methodology
- consider it equivalent to require_once() calls
- having to do it in order to maximize efficiency and quality while minimizing weak dependencies
- then calling inline the functions within those packages, i.e. papers
- and indexed at the end through the list of references for efficiency and clarity of "execution" by the reader
- consequently the reader is equivalent to an interpreter with strict constraints
- concepts have to be explained and in the right order
- data have to be provided
- required leaps of logic are to be spotted and fixed (as if it was not just syntax errors but rather broken logic flow)
- ship the MVP and iterate
- test often based on the dataset
ResearchRoadmap also mentions Feyerabend and tries to provide a set of tools rather than a strict method
1st iteration of the Research snake (31/05/2011)
- if distinct layers exist and are not just the result of levels of analysis, how is it that they remain stable and that computation across them remains difficult?
- theory review/frame question
- OWF, Controllability, complexity, informational physics, POWF/PUF, recursion/fractals
- are OWF the stabilizing mechanism that allows complexity to grow relentlessly despite its energy cost?
- method
- list a model of each layer extracted from its relevant dataset
- check for OWF properties within this model
- as external communication (protecting against peers), as internal communication (protecting against constituents)
- ?
- ?
Verifications
roughly from the easiest to the hardest
- simulate amplification hardness by evolutionary computation
- find patterns of OWFs in models of each layer
- find patterns of OWFs in phylogenies
- find recursive OWF in the highest known layer
Note that in simulations an OWF can simply be represented as a weight on edges; there is no need to actually use a proper OWF, e.g. costly factorization of primes. In the same way, hardness amplification can be simulated by increasing the value of the weighted edge. Relaxing this constraint by setting equal weights on the two edges between the same vertices can be used to test stability, then controllability, without OWF; if this allows for growth in complexity then SIA does not hold. If OWF are required though, exploring the threshold value, if there is one, could yield very interesting results; percolation models could be helpful there (see the sketch below).
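A minimal simulation sketch of this shortcut, assuming Python with numpy; the survival criterion, the challenger budget distribution and the amplification rule are placeholder assumptions, not the actual SIA model.

```python
import numpy as np

rng = np.random.default_rng(0)

def lifetime(asymmetric=True, steps=200, amplification=0.05, energy=10.0):
    """Toy run: the structure survives a step if the cost of inverting it
    (the reverse-edge weight) exceeds the challenger's budget; hardness
    amplification raises that weight at an energy cost."""
    h_reverse = 4.0 if asymmetric else 1.0   # 1.0 = same cost both ways (no OWF)
    t = 0
    for _ in range(steps):
        if rng.exponential(2.0) > h_reverse:  # a challenger inverts the structure
            break
        t += 1
        h_reverse += amplification            # amplify hardness...
        energy -= amplification               # ...by spending energy
        if energy <= 0:
            break
    return t

print("with asymmetry   :", np.mean([lifetime(True) for _ in range(500)]))
print("without asymmetry:", np.mean([lifetime(False) for _ in range(500)]))
```

If the symmetric variant reached comparable lifetimes, complexity could grow without OWF and SIA would not hold; scanning the initial weight would locate a possible threshold, percolation-style.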
Naive process
Model based
- test if a system without OWF properties can remain stable
- if it is a minimum assumption
- if it can be applied recursively
- etc
Tools
Evidence based
- list networks to study
- e.g. phylogeny of OWF in DNA genomes?
- justify graph/network datasets as the unifying framework through Barabási's recent Science paper WithoutNotesMay11#TamingComplexity
- get their datasets
- list properties of one-way functions (OWF)
- check if networks exhibit properties of OWF (see the sketch after this list)
- if this is the case at all levels
- thus it is the case at the last level of maximum complexity too
- thus applying recursively there
- consequently one could expect a fractal "trace"
- no result for "fractal one-way function" and "Recursive One-Way Function" returns 2 patent results (1995/1997 on car security system)
- note that there might not be a large number of substrates that would allow that, e.g. POWF/PUF without recursion could occur with glass cubes (cf paper) but might be very difficult to realize on others
- might also be useless since recursive and non-recursive OWF might be equivalent thus recursion would only make sense if developed over time, i.e. based on evo-devo (phylogenesis+ontogenesis)
- its encoding would probably differ for each level though, since the "interpreter" would be different
- if it is possible, look for a fractal OWF at the last level instead of on each level
- consider its opposite, what is falsifiable?
- is there a complex system that does not rely on OWF?
- note that this requires a full coverage of OWF, which does not seem to be the case now
- are there no OWF in nature?
- overall is the distinction between an OWF and other functions meaningful? is it too blurry a family of functions?
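A minimal sketch of such a check, assuming Python with networkx and a directed, possibly weighted, dataset; reciprocity and per-pair weight asymmetry are used here only as crude proxies for one-way relations, not as established OWF properties.

```python
import networkx as nx

def asymmetry_signature(G):
    """Crude one-wayness proxies on a directed graph:
    - reciprocity: share of edges whose reverse edge also exists
    - mean weight ratio on reciprocated pairs (1.0 = fully symmetric)."""
    recip = nx.reciprocity(G)
    ratios = []
    for u, v, data in G.edges(data=True):
        if G.has_edge(v, u):
            w_uv = data.get("weight", 1.0)
            w_vu = G[v][u].get("weight", 1.0)
            lo, hi = sorted((w_uv, w_vu))
            ratios.append(lo / hi if hi else 1.0)
    mean_ratio = sum(ratios) / len(ratios) if ratios else None
    return recip, mean_ratio

# Placeholder dataset; replace with a loaded layer model (e.g. an edge list).
G = nx.DiGraph(nx.scale_free_graph(200, seed=1))  # collapse the multigraph
print(asymmetry_signature(G))
```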
Remarks
Those remarks should be grouped and structured overall; currently they seem to be independent assumptions rather than coherent and ordered pieces of information. The objective is to bootstrap the explanatory process to reach the explanandum in the most efficient and safe fashion.
Differences : intra-layer / extra-layer, cooperation / exclusion, integration / differentiation
- through the inevitable creation of waste, every process creates a potential signature that can be used to invert it
- continual hardness amplification can be considered wasteful, yet it is required even against "simpler" organisms that might, simply given enough time, find new ways (Red Queen hypothesis)
- organisms at any level are "attackers" in the sense used in cryptography and security, with the sole goal of egoistically maximizing energy intake; only from that does the rest derive
- also recall Landauer and Bennett, i.e. that since information is physical, it should not be a surprise if physical organisms exhibit interesting properties from information theory
- the economic framework only applies to organisms with negative entropy (cf Lorentz p21, Schrodinger's writings on life e.g. Wikipedia:What Is Life%3F or LayeredModel#refbiologyseparation) as they trade resources for more resources whereas lower systems just dissipate
- a situation that no rational economic actor would willingly accept: if resources are traded for more resources, the margin has to be extracted from somewhere, and thus potentially from some other actor's wealth too; consequently it has to be conducted through an asymmetry of some sort, most likely informational or physical (which should be equivalent according to MRA_InformationalPhysics)
- one can see a poetic illustration in Zibaldone - Il giardino sofferente by Giacomo Leopardi, 1826
- spectrum or gradient of asymmetry rather than a binary separation (thus of leveraging OWF), based on economic principles and underlying criteria like how central it is for the organism, how repetitive the collaboration cycles are, etc.
- if so a visualization could be meaningful since it is not necessarily linear
- applies to control/hierarchy; a truly peer-to-peer system would then not necessarily display properties of OWF (discussed with Paola on 28/05/2011)
- only the uppermost level (one can imagine a pyramid of levels) can be purely collaborative (peer-to-peer) as it is not an affordance for another organism trying to apply controllability
- explore whether such a peer-to-peer collaborative system, which allows its participants to define their protocol and leave at will, has no OWF at all or rather has bi-directional OWF
- is a bi-directional OWF relation equivalent to a classical coupling relationship?
- note that almost all DHT-based systems use a strong one-way hash function (but for what, the protocol and exchange of peers, or simply hashing of content with no security implication?)
- the uppermost might allow itself to be inefficient
- whereas a controlled lower level is probably driven to efficiency by its controlling upper level, since the latter can consider the former through a purely utilitarian view and would dedicate resources to optimize it, in particular by removing aspects that are not directly required for itself while remaining stable
- but the underlying levels of an inefficient uppermost level would necessarily in some aspect reflect this inefficiency. The situation could remain sustainable for as long as required resources are not depleted faster than generated.
- yet peer-to-peer technologies have appeared very late; there might be a radical shift in adoption as the cost of their usage decreases, but one can still wonder why they did not appear before
- also the peer-to-peer system is perpetually subject to tension from participants who would want to create benefits for themselves by spending additional resources to control other peers, either individually or by creating new groups within the peer-to-peer system, especially if the efficiency of participants is not exactly equal
- thus trying to create another layer
- a system can initially appear to use no OWF but rather to "simply" use physical safety over its parts, but then one could look at the process that resulted in the system; part of its description or blueprint could then show OWF properties
- consider it as a continuous historical process (as roughly considered in Seedea:Research/PhylogeneticFlowProgramming)
- overall OWF might sound like "the physical coercion of the poor", yet if one considers physics as informational (rather than information as physical) it might just as well mean being overall safer
- this would be coherent with the current information-theory centric paradigm shift
- by creating boundaries, a la TheTinkerersAccomplice#Chapter6 and folding, they create internal spaces allowing for new solutions to emerge safely
- OWF and their result are recursively used as affordances
- the formalization required for the model-based approach would avoid relying only on examples, which often tend to be anthropocentric rather than objective
- one tends to quickly and often check the applications most important to oneself (e.g. politics or eusocial species) and apply them from one's own viewpoint
- which is fine but also limited, and if used only as such can create blind spots and invalid assumptions
- OWF are also very practical to separate groups of elements with intrinsic computational properties
- distinguish the self from the non-self, or who is inside a group and who is not, which is necessary for healthy collaboration by being able to reject elements that do not follow the established rules, cf Wikipedia:Elinor Ostrom#Research
- once such a function has been identified as intrinsic to a group and not to another, a test can be asked that remains cheap for the tester but too costly for the tested if it is not part of the group
- e.g. immune system, shibboleth, captcha, ...
- arms races as Seedea:Research/Drive can thus be considered natural OWF generators or hardness amplifiers
- and obviously the cryptography/cryptology arms race itself, in a rather head-spinning recurrent fashion
- this removes the need for any coordinated effort of participants against non-participants in the arms race; competition alone increases complexity without regard to the understanding (or rather lack thereof) of non-participants
- one can imagine a phylogeny with organisms specific to niches and a correlation between hardness amplification of the OWF and time
- OWF are not just useful to maintain control over lower levels but also to protect against any (learning) organism present in the local niche
- an advantage based on OWF can, at least partly and in principle, be delegated to the environment, e.g. formerly by simply dueling with one's back against the sun, and more recently cryptography
- according to Munz an organism is the mirror of its environment, thus if OWF exist in nature they could also become embedded
- yet by definition they can't be inverted
- eventually still possible to embed them "verbatim" thus creating a higher level
- use the layer selector visuals ExtendedLayeredModel#UserCenteredCurve
- if SIA is correct then one should have the right set of tools (e.g. equations) for the right tasks at each specific layer
- if multiple layers are involved, then probably through the equations of the uppermost one coupled with some specifics of the lower one only when required
- consider a form of "lazy precision" (a la lazy evaluation in Haskell) in which complexity is increased by adding lower level models on an as-needed basis
- are there any efficient evolutionary computation methods (EAlgo, EGrammar, DirectedPrograming, ...) for hardness amplification? (see the sketch after this list)
- if so then natural evolution could have followed the same path
- the realism of the ontogenetic perspective (a la evo-devo) has to be taken into account
- metaphor of "the natural evolution of controllability (thus politics)"
- the value of OWF on very basic layers, lower than chemistry, might have changed over time; thus arguing that the model is too stable to have created complexity does not necessarily rule out the hypothesis
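A minimal evolutionary-computation sketch for the question flagged above, assuming Python with numpy; hardness is abstracted as a scalar per individual, fitness trades relative hardness against amplification cost, and nothing here uses an actual OWF.

```python
import numpy as np

rng = np.random.default_rng(1)

def evolve(pop_size=50, generations=100, cost=0.3):
    """Toy (mu+lambda)-style loop: each individual is just a hardness value;
    fitness rewards hardness relative to the population and penalizes the
    energy spent amplifying it (cf. the fitness notes in the Model section)."""
    pop = rng.uniform(0.0, 1.0, pop_size)
    for _ in range(generations):
        offspring = np.clip(pop + rng.normal(0.0, 0.1, pop_size), 0.0, None)
        both = np.concatenate([pop, offspring])
        fitness = (both - both.mean()) - cost * both   # relative hardness minus cost
        pop = both[np.argsort(fitness)[-pop_size:]]    # keep the fittest
    return pop.mean()

for c in (0.1, 0.5, 0.9):
    print(f"amplification cost {c}: mean hardness {evolve(cost=c):.2f}")
```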
Consequences
This section should also be organized in layers and their exploration should be systematic. Whether to follow ascending or descending hardness should be considered.
Logical implications have to be considered as hypotheses that can potentially invalidate the proposal iff they are implausible.
Information
Physics
- resistant to multiverse physical constants hypothesis since it is a relation between functions, not an absolute value
Chemistry
- improvements in drug discovery software
Biology
- the older the complex system, the more stable it is relative to its environment, and thus the more likely it is to be based on a strong OWF
- which gives little information on how it arrived there; it can be out of pure initial luck, through constant costly hardening, or a mix of both
- allowing for the creation of a safe "self" within the constraints of a completely physically interconnected world that would intuitively make it impossible
- constraints on the domain of what was after each major step (auto-catalysis, homeostasis, ...) extensive search
- boosting the pace of the arm-race
- explaining the growth in complexity through sufficiently stable recursive arm-race processes
- catastrophic scenarios based on loss of control like Wikipedia:Grey goo or viral GMO super-bacteria might still be possible but are much more unlikely because of the increase in hardness amplification
- applying to AI/AGI too, as the pace of "intelligence" increase might also be bounded
Neurology
Psychology
- humor as an expression of controllability applied to oneself or others by showcasing the ability to predict and control thoughts, resulting in an unexpected non-harmful outcome
Sociology
- shibboleth as social OWF: easy to check, hard to fake and combinatorial (e.g. L'inganno della cadrega by Aldo, Giovanni e Giacomo) as one can combine expressions
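A minimal sketch of the asymmetry behind such a test, assuming Python; the salted hash stands in for culturally acquired knowledge, so checking a candidate answer is cheap for the tester while producing it without membership amounts to a brute-force search.

```python
import hashlib
import itertools
import secrets
import string

# The "group knowledge" (hypothetical): an expression only members can produce.
salt = secrets.token_bytes(8)
secret = b"scibbolet-pronounced-right"
reference = hashlib.sha256(salt + secret).hexdigest()

def verify(candidate: bytes) -> bool:
    """Cheap for the tester: one hash and a comparison."""
    return hashlib.sha256(salt + candidate).hexdigest() == reference

print(verify(secret))      # True: a member answers instantly
print(verify(b"sibolet"))  # False: a non-member fails

# For the tested non-member, guessing is a combinatorial search (costly at scale).
alphabet = string.ascii_lowercase.encode()
guesses = (bytes(t) for t in itertools.product(alphabet, repeat=4))
print(any(verify(g) for g in itertools.islice(guesses, 10000)))  # False
```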
Politics
- explaining the stability of hierarchies, including counter-intuitive ones, e.g. Discours de la servitude volontaire, Etienne de la Boetie, 1549
- An information-theory inspired interpretation of politics
- applying OWF to controllability (maintaining control over cooperative sub-systems)
- applying it to economics (using OWF as a tool to make challenges too costly)
- applying it to politics (peeling the layer stack from the bottom to the last layer of control)
- e.g. the ongoing promise of scrutiny of processes despite their constant complexification
- yet no need for any collaboration between individuals; it instead results from the ongoing arms race
Cross every layer
Economy
- tendency of non-regulated markets to increase inelasticity
- use the inverse of the proposed model to estimate the perceived value of a piece of information, from the point of view of the target system relative to its surroundings, based on the strength and number of OWF used to protect it
- actual impossibility to downsize or reduce the "overhead" once it has been reached (simplification or degrowth leading to instability and a risk of increased controllability)
Epistemology
- a physical simulation of a political phenomenon should remain computationally prohibitive; only increasingly lossy abstractions are feasible (cf discussion with Sylvain on LayeredModel)
- as the simulation of a higher level by a lower level is by "design" (to maintain stability, thus to become "interesting") too costly
- also with the consequence of an energetic/economic upper bound on the quality of virtual reality or simulation-based research (i.e. the classical argument of Hawking, p76-77 of ABriefHistoryOfTime)
- reductionism possible in theory yet probably infeasible computationally/economically
- no necessity to fully understand the lower layers to be stable
- is it meta-stability? does it require a "bottom" layer? can it be auto-stable?
Metaphors and analogies
- cryptological worldview, Russell's Cryptomania generalized
- bringing pillars back to their source
According to diff dates, mostly through
- layer distinction as a result of intrinsic computational cost (January 2011)
- cost through one-way functions as a feature against controllability (May 2011)
but also the experience, as a young kid, of trying to build a safe entirely out of classical Lego bricks while knowing it was not correct and eventually could never be (cf CognitiveDevelopmentFailure#MyFailureAsAKid), and also Emergence on organized layers.
Details on increase in complexity and downward causality
- are layers appearing "distinct" solely because of complexity of computation between each scale or models (e.g. chaotic system, combinatorial or power law)?
- is it the result of arms races leading to niche exploitation, where those niches are only safe thanks to the computational cost?
- a model that seems to be the main mechanism of security: the use of bijective functions but with radically different costs for the inverse function, see also Wikipedia:Computational hardness assumption (see the sketch after this list)
- consider Needs#ComplexityOfInverseFunction
- inspired by MardiInnovation07
- see WithoutNotesMay11#POWF Physical One-Way Functions, Science 2002
- if so what would be the consequence for algorithmically defining a unifying model?
- one-way functions could precisely be what allows growing complexity between layers by securing control in a cost-effective way
- this could be embedded in the topology of the network, consider the link between controllability thus Wikipedia:Degree distribution (cf WithoutNotesMay11#TamingComplexity) and one-way functions
the resulting hypothesis would be that "systems with low controllability, represented as networks, all share properties with one-way functions"; not necessarily, since those systems can just be closed (to refine)
- current implications
- perpetual arms race of controllability while maintaining the advantage of cooperation
- thus I:Main/WikisBuffer#gToM
- explore alternatives
- Wikipedia:Entropy (arrow of time)
- why is the 2nd law not sufficient? esp. can it explain growth in complexity?
- compare proposed order and their free energy rate density (FERD)
- if FERD applies, does it involve topological requirements thus similarities?
- e.g. organization of components with identical function between a CPU and a city?
- see also Wikipedia:Scale relativity discovered again through EvoDevo movement
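A minimal illustration of the cheap-forward, costly-inverse asymmetry flagged above, assuming Python; modular exponentiation is cheap while recovering the exponent (a toy discrete logarithm) is done here by brute force, with deliberately small, purely illustrative parameters.

```python
# Forward direction: cheap (modular exponentiation).
p = 2_147_483_647            # the Mersenne prime 2^31 - 1
g = 7                        # a primitive root modulo p
x = 123_456_789              # the hidden internal "behavior"
y = pow(g, x, p)             # the externally visible "action" f(x): instantaneous

# Inverse direction: costly; this toy discrete-log search gives up after a budget.
def invert(y, g, p, budget=10**6):
    acc = 1
    for k in range(budget):
        if acc == y:
            return k
        acc = (acc * g) % p
    return None               # budget exhausted: the asymmetry h in action

print(y, invert(y, g, p))     # prints the action and None (x is far beyond the budget)
```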
Discussions
- past
- ask for review from
- mathematician working on OWF/ZKP/...
- BitCoin workshop presenter
- physicist working on POWF/PUF/quantum information/...
- else Paola's aunt at CERN
- ...
Literature review
Is there a tool like Zotero or Mendeley that allows drawing a hierarchical network with the thesis to write as the root, and that for each cited paper displays its references as sources and its future citations (e.g. using Google Scholar or Microsoft Academic), with dotted lines in between?
Mathematical roots of the OWF family
OWF and complexity
- A Personal View of Average-Case Complexity by Russell Impagliazzo, 1995 Complexity Conference
Minicrypt is the minimum for this hypothesis to hold; Cryptomania would make it safer, thus the precise boundary has to be explored
- or rather the spectrum of proposed worlds would go from the more unstable to the more stable
- counter-intuitively, Algorithmica would thus not necessarily be so exciting, by providing no affordance for complexity to "grow on"
See also Mathematics#OneWayFunction
Mathematical Control Theory
Information asymmetry in physics and economy
OWF in nature
Hardness amplification
Previously explored material
- immune system and Bitcoin WithoutNotesJuly10#DavidLewis
- locked-in cycles principle InformationRules#Chapter5 and later
- extremely tricky cycle since its controllability compounds over each iteration
- "Information is costly to produce but cheap to reproduce." (p3) except maybe in the case of OWF: one can solely reproduce the final product but not anything more generalist
- thus maintaining dependency on the source of information
- check how network effect InformationRules#Chapter7 makes it stable and consider it within the constraint of Wikipedia:Murray%27s law
- see also Wikipedia:Coupling (computer programming)
- "Which route is best, openness or control? The answer depends on whether you are strong enough to ignite positive feedback on your own. " (p197)
- "In choosing between openness and control, remember that your ultimate goal is to maximize the value of your technology, not your control over it. " (p197)
- hence the importance of OWF rather than just blocking or ignoring others
- Bioencryption by Team:Hong Kong-CUHK at IGEM 2010
- which does not seem to cite the BioCryptography paper
- bitcoin WithoutNotesMay11#Bitcoin FinancialTools#Bitcoin
- especially interesting since it mixes financial transaction and peer-to-peer system
- seems to be using chained encryption, which does look like a recursive use of OWF! (see the sketch after this list)
- even epistemologically speaking, since banks/mints are by definition information hubs required to track transactions without disclosing all information publicly
- history of crypto in TheCodeBook and eventually MecaMind
- rather now the phylogeny of cryptographic systems, in order to discern patterns of the process itself rather than the complexity of today's tools
- the Price Of Anarchy WithoutNotesSeptember10#TimRoughgarden
- WithoutNotesMay11#TheTaleOfOneWayFunctions including MT(k) the notion of multimemedian time of inverting
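A minimal sketch of chaining an OWF, as flagged in the Bitcoin item above, assuming Python's hashlib; each digest commits to the previous one, so altering an early element requires redoing every later application of the function. The transaction strings are placeholders.

```python
import hashlib

def chain(blocks):
    """Recursively apply the OWF: each digest covers the previous digest plus
    the new block, so the final digest commits to the whole history."""
    digest = b"\x00" * 32                       # genesis placeholder
    for block in blocks:
        digest = hashlib.sha256(digest + block).digest()
    return digest.hex()

history = [b"tx: a pays b", b"tx: b pays c", b"tx: c pays a"]
print(chain(history))
print(chain([b"tx: a pays b (tampered)"] + history[1:]))  # early change, different final digest
```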
Glossary
Define vocabulary a la Wikipedia: the first occurrence begs for a link to a dedicated page, since this page pulls from multiple domains.
See the word cloud generated via wordle.net (through a @w Vimperator macro) to gradually restrict it to the minimum glossary. Consider regex or manual search-and-replace.
To do
- move most of the older content of InspiredBy to Problem
- ensure Wikipedia:Operationalization and maximize the Wikipedia:Information-action ratio
- use the resulting model for real-life applications
- improve own efficiency in political decisions
- provide a more realistic structure for results of AlgorithmicEpistemology
- since its result would be information that requires to be stable enough to be built upon
- allows looking for non-obvious or explicit forms of control
- e.g. through recent information networks (i.e. the Internet) and their evolving set of actors (i.e. governments, companies, ...) and rules (i.e. exploitation and regulations)
- automate and thus allow to delegate the detection process and integrate to a personal security system
- submit results beyond this wiki
- Complexity
- Physica A
- Science
- Nature
- does it also drive Wikipedia:Allometry#Allometric_scaling? is it at least compatible with it?
- but then why would there be no quantum-computing-based OWF breaker? surely, given the arms-race process, there should be organisms that would have leveraged it if it was possible
- especially since some organisms have already been shown to exploit quantum effects http://jdmoyer.com/2011/05/04/how-to-see-magnetic-fields/ and can perform computation through classical counting
- existing research of biomimetics applied to cryptography?
- overall there seems to be an implied consensus that organisms directly consuming or leveraging by recombination simpler organisms (lower layers) are "positive", whereas simpler organisms, e.g. cancerous cells, viruses, ..., leveraging more complex organisms are "negative"
- is it relevant? is it a viewpoint bias? can those be considered counter-examples to the theoretical economic impossibility to invert OWF in nature?
- overall a criticism of every challenge to the established situation, e.g. species xenophobia
- every time an existing OWF is leveraged rather than entirely created a considerable competitive advantage is gained
- thus clearly giving an incentive for re-use and building on top, combining and encapsulating
- rather than solely maximizing hardness, it is possible to look for a distribution of hardness per energy spent (or cost) (see the sketch after this list)
- one could imagine a distribution (e.g. gaussian, in particular with expected diminishing returns) with a "sweet spot" amongst a spectrum of potential hardness which is neither minimal (too weak) nor maximal (too costly) but rather just economically sustainable
- avoid confusion between
- executing the inverse of a function
- inverting a function, or determining the inverse of a function (a sort of reverse engineering)
- describing the inverse of a function (complexity of the representation, a la Kolmogorov complexity)
- one could assume that Kolmogorov complexity ∝ cost of inversion
- consider how apparently different structures at different scales, e.g. Museum#Teotihuacan, seem to have architectural isomorphisms
- one could consider Society itself as a processor, especially according to my Beliefs#B1 thus with its upper-most layer as an equivalent of "software"
- if so, what would be its instruction set? what would the links between policy and science be?
- see also min 8:30 TEDxZurich: Who Controls The World comparing economical network with urban organization
- Musk's Gigafactory with its giant-CPU-inspired design
- in an evolutionary, thus phylogenetic and ontogenetic, perspective in which there is a continuum of interactions and processing, the theoretical maximum hardness might be less significant than the cost of hardening (which is not necessarily linear)
- if substrates have a maximal theoretical pace of evolution yet the organism still takes part in an arms race, the transition to substrates allowing a speed advantage might happen by default
- e.g. slow genetic evolution in humans but fast cultural evolution (epigenetic becoming epi* or epilayer)
- intuitively "cross every layer" can be seen as tools for the uppermost layer
- yet can this be justified? is it always the case?
- are those tools to be even considered as layers in the first place since they are abstractions?
- are artificial attempts like Wikipedia:Program synthesis, Wikipedia:Evolutionary computation or automated Wikipedia:Proof theory remaining inefficient precisely because they are not taking into account the constraints that non-artificial processes are taking into account solely because they have emerged in a very restricted environment?
- i.e. is there an inverse relation between how high a layer is, how generalist it can be, and thus how important embedding the right constraints is to remain efficient?
- as in André Malraux's "One can only lean on that which resists", what initially seems like a curse might be a blessing (a physically constraining environment allowing the exploration of a tractable space of potential solutions) and what seems like a blessing might be a curse (abstracting away from physical constraints, efficiently allowing the exploration of a yet intractable space)
- simulation solutions that aim to reconstitute a realistic environment (e.g. Brest lab.) might work but will have to be compared in efficiency; if more resources are spent to re-enact the original computation and no structural rule (or explanatory power) is extracted that would facilitate the generation of other solutions, then the whole process might be a waste
- consider also http://www.agi-wiki.org/Tests/Tests and the OwnConcepts#gToM economic aspect; in education in particular, tests are administered to make pupils progress and "route" them properly, but those have to be cheap, even if over time they can be inverted and produce test-matching pupils rather than learning pupils
- inspired by "Thus, the crux of our explanation of the difficulty of creating good tests for incremental progress toward AGI is the hypothesis that general intelligence, under limited computational resources, is tricky." (emphasis added) WithoutNotesJune11#EvaluatingProgressTowardAGI
- a form of diminishing return
- hence AGI could be defined as the most efficient solution to solve the most generalist set of problems (at this point this should be moved to another page as it does not really use or clarify the topic of this page)
- explore Unexpected Union - Physics and Fisher Information: An uncritical review of the same book and an introduction to EPI, SIAM News 2000
- consider the evolution of layers over time, e.g. cosmogenesis, a la evo-devo
- if there is no straightforward way each layer can have appeared over another, then the model becomes questionable
- most likely it is precisely the change of topology in the underlying layer that leads to an opportunity for the higher layer, cf. the visualization
- e.g. the political layer made no sense at the early age of the universe, but was also not possible to form
- are there specific phenomena on each layer, e.g. the first auto-catalysis, the first cell membrane, etc.?
- which might also constitute patterns, if so, this should be expressed within the Model
- consider Adversarial Machine Learning, ACM Workshop on Artificial Intelligence and Security October 2011
- http://blog.computationalcomplexity.org/2003/09/one-way-functions.html
- could it help for AIW01#ConstructingKnowledgeInCommunity
- consider if Galois fields (or Wikipedia:Finite field) are related to this problematic
- cf WithoutNotesJanuary12#ScottRickard and Wikipedia:Costas array
- can WithoutNotesFebruary12#MicrobesAndMentalIllness, in particular WithoutNotesMarch11#OphiocordycepsUnilateralis, be considered
- counter-example?
- limit rare cases to explore?
- check http://cstheory.stackexchange.com/questions/tagged/one-way-function
- epistemology+one-way-function
- http://lesswrong.com/lw/54q/cryptanalysis_as_epistemology_paging_cryptonerds/
- consider if neural networks are one-way functions
- training is expensive but execution is cheap
- consider Wikipedia:Assembly theory
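A minimal numerical sketch of the "sweet spot" idea from the hardness-per-cost item above, assuming Python with numpy; the diminishing-returns benefit curve and the linear cost are placeholder assumptions, not measured values.

```python
import numpy as np

hardness = np.linspace(0.0, 10.0, 201)
benefit = 1.0 - np.exp(-0.6 * hardness)   # stability gained: diminishing returns
cost = 0.07 * hardness                    # energy spent on amplification: roughly linear
net = benefit - cost                      # economically sustainable return

best = hardness[np.argmax(net)]
print(f"sweet spot around hardness {best:.2f}, net return {net.max():.2f}")
```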
ToRefactor