Phylogenetic flow programming (ΦFP)
A programming language that doesn't change
the way you think is not worth learning.
Principle - Minimal Properties - Specifications - Specificities - Implementations - Current Usages - Possible Usages - Inspired By - See Also - To Do
Rather than considering an object only as it is right now, it is fundamental to be able to leverage the history of that object, and in a coherent and efficient manner. Phylogenetic flow programming aims at doing so by providing a structuring phylogenetic atom (object as well as function) which is part of a larger program in which this atom is nothing but an information flow.
- date of birth
- executable code
- which can also be just outputting data (like text, number, blob, ...)
- history of changes
- resources required?
- possibly rather defined by the closure it is in
Warning (based on Wikipedia:Duck typing): without functions used on it, does it actually make it a "proper" atom?
- add fundamental functions
- distance from other atom
- list versions in the history of changes
- acquire version X in the history of changes
- revert to it
- is that actually interesting without forking?
- fork from it
- run it without reverting it
- structurally leveraging
- data flow?
- ideally facilitating
- the environment or closure itself should be an object or atom moving through time, thus also a ΦFP atom
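The minimal properties and fundamental functions listed above can be sketched in code. This is a minimal illustration in Python, assuming full snapshots as the history representation; every name (`PhiAtom`, `acquire`, `fork`, `distance`, ...) is hypothetical, not an existing API.

```python
import copy
import time

class PhiAtom:
    """Minimal sketch of a phylogenetic flow atom (all names hypothetical).

    The atom carries its own history: every change is recorded, so past
    versions can be listed, acquired, run, reverted to, or forked from.
    """

    def __init__(self, code):
        self.birth = time.time()   # date of birth
        self.history = [code]      # history of changes (full snapshots here)

    @property
    def code(self):
        return self.history[-1]    # current executable code (or plain data)

    def change(self, code):
        """Record a new version instead of overwriting the current one."""
        self.history.append(code)

    def versions(self):
        """List versions in the history of changes."""
        return list(range(len(self.history)))

    def acquire(self, v):
        """Acquire version v without reverting to it."""
        return self.history[v]

    def revert(self, v):
        """Revert by re-appending version v: the history itself is kept."""
        self.history.append(self.history[v])

    def fork(self, v):
        """Fork a new atom from version v, carrying the history up to v."""
        child = PhiAtom(self.history[v])
        child.history = copy.deepcopy(self.history[: v + 1])
        return child

    def run(self, *args):
        """Run the current code; an atom can also just output data."""
        c = self.code
        return c(*args) if callable(c) else c

    def distance(self, other):
        """Naive distance from another atom: total diverging history length."""
        shared = 0
        for a, b in zip(self.history, other.history):
            if a != b:
                break
            shared += 1
        return (len(self.history) - shared) + (len(other.history) - shared)
```

The snapshot-per-version representation is the simplest possible choice; a real implementation would more likely store deltas, as DVCSs do.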
- time reversible
- constant value
- deterministic value
- example of a mathematical curve
- this can arguably be defined as a constant
- time irreversible
- produces too much data
- the code of the simulation is not the same as its output (which could be potentially extremely larger)
- example of fractal as code vs rendered fractal
- the history is not handled by the system but the programmer can still manage it himself
- unknown/to define
- "normal" phylogenetic flow atom
- history kept through code change
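The time-irreversible case (fractal as code vs rendered fractal) can be illustrated with a toy generator: the code below stays a few lines long while its output grows exponentially with depth, which is why the system would keep the history of the code, not of the output. The specific fractal and function name are illustrative assumptions.

```python
def sierpinski(n):
    """Render a text Sierpinski triangle of depth n.

    The code is constant in size, but the rendered output doubles in
    height and width at each step: versioning the output itself would
    produce far too much data, so only the code belongs in the history.
    """
    lines = ["*"]
    for _ in range(n):
        width = len(lines[-1])
        # top half: the previous triangle, centered
        top = [line.center(2 * width + 1) for line in lines]
        # bottom half: two copies of the previous triangle side by side
        bottom = [line + " " + line for line in lines]
        lines = top + bottom
    return lines
```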
See also recent request to RecordedFuture on their definition of "Math of time"
- possible to query an entire "time slice"
- executing a neural network
- at time t-1
- a sub-network of it at time t-1
- compare with the entire current execution
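A minimal sketch of the "time slice" query above, under the assumption that each atom records its states at discrete logical times (all names hypothetical):

```python
class SlicedAtom:
    """An atom that keeps every state it went through, indexed by logical time."""

    def __init__(self, value):
        self.states = [value]

    def update(self, value):
        self.states.append(value)

    def at(self, t):
        return self.states[t]

def time_slice(atoms, t):
    """State of every atom at logical time t (clamped to its last known state)."""
    return {name: a.at(min(t, len(a.states) - 1)) for name, a in atoms.items()}
```

With a network of such atoms (e.g. the weights of a neural network), `time_slice(network, t - 1)` can be compared directly with the current slice, or restricted to a sub-network by filtering the dictionary.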
Do not forget the limitations and thus the usages which will probably not be "economically" interesting/efficient.
none as of end of July 2010
Consider doing a "paper" version (possibly with post-its) using printed sheets showing the default structure of a phylogenetic flow atom, an entire program, etc., in order to use it without a computer.
- for each part of the SoftwareStack (including programming language), find the existing component with
- most of the hardest-to-implement foundational requirements
- the biggest ability to easily be modified
- the community that most efficiently builds on and manipulates those foundations
Including Wikipedia:Programming paradigm and its Wikipedia:Template:Programming paradigms.
- none as of end of July 2010
- my failure to represent such a model years earlier through arithmetic
- probably not intrinsic to the formal notation but rather due to my education (and thus bias) in computer science
- time in particular was really complex to handle in a way that was practical yet non-tautological
- Hypothesis#virusmodel regarding "continuous" flow
- collaboration with Dira on the limitations of computer science languages in late 2009
- biological phylogeny and computational phylogenetics
- evolutionary epistemology
- massively distributed programming
- generalization of CVS/SCM/... applied to wikis (in particular with RiverOfTime)
Written down on 24/07/2010 at about 8pm on the banks of the Marne while listening to Red Room by Dennis Ferrer and reading Turtles, Termites, and Traffic Jams: Explorations in Massively Parallel Microworlds by Mitchel Resnick. Cf From theoretical epistemology to paradigm as tool
- make explicit the hypotheses, predictions, and needs it relies on
- why is it actually worth spending time on? What new discoveries could change that (in a positive or negative way)?
- macro perspective / long-term
- apply to
- new data
- the environment, closure, VM also is a phylogenetic flow atom
- own data
- in a distributed manner, a la map/reduce
- existing well formatted data sources
- especially bibliographical databases but overall Datasets
- other data sources
- micro perspective / short-term
- live programming
- compare micro and macro perspectives
- can it be scale-free?
- how can that be leveraged?
- can key physical structures be efficiently modeled and used?
- human brain (including corticogenesis or ~EEM)
- epistemology (~EET)
- software projects
- including Linux kernel and GNU/Linux distributions (as "meta" software projects)
- financial market (including behavior economics)
- items of LayeredModel
- emergent social behaviors
- itself, the phylogenetic flow paradigm
- including its implementations and usages (and failures)
- can simple technological communication system be modeled and used?
- that I master and enjoy
- that I am not used to
- results imported for the micro and macro perspectives should allow export back to the original format of their sources (even if data loss happens)
- probably mainly wikis and CVSs
- integrate BackEnd
- make sure that the vocabulary that does not come from computer science (mainly biological vocabulary) is used properly
- slide presentation to a fictional crowd of
- pedagogical and cognition perspective
- going further than the computational perspective
- simulate "offline" programming
- helping the paradigm to be efficient without requiring a computer to program
- short-term memory and the 7 +/-2 heuristic
- nature of the project
- programming paradigm?
- cognitive framework?
- a message between more than 1 atom can also be a ΦFP atom
- define the basic sets of functions specifically relevant
- "utility" functions that allow
- manipulation of the flow
- change comparison
- change apply/revert
- atom merge/fuse
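A rough sketch of two of these utility functions in Python, using difflib for change comparison; the merge/fuse below naively unions two histories in order, since the actual merge semantics are still to be defined:

```python
import difflib

def compare(old, new):
    """Change comparison between two text versions of an atom."""
    return list(difflib.unified_diff(old.splitlines(), new.splitlines(),
                                     lineterm=""))

def merge(history_a, history_b):
    """Naive atom fuse: union of both histories, preserving first-seen order.

    A placeholder for real merge semantics, which are still an open question.
    """
    seen, merged = set(), []
    for v in history_a + history_b:
        if v not in seen:
            seen.add(v)
            merged.append(v)
    return merged
```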
- test how easy it is to build evolutionary algorithms (EA)
- add notes from T,T&TJ book
- to explore
- is time a specific dimension in the phylogenetic flow atom?
- could it also be abstracted and generalized?
- would it be useful and efficient to follow the same principle in highly dimensional phylogeny?
- study key underlying mechanisms and the evolution of their implementations
- inferring phylogeny
- how does it integrate with the semantic paradigm?
- wiki to semantic-wiki? web to semantic web?
- are those "just" meta-data to generate, handle then exploit?
- can it be easily integrated or is it a radical change? how so?
- MPprogramming.com, a resource for multiparadigm programming techniques
- leverage FB_Wiki:Tools/Programming#EntireStack to know precisely where to try and apply the engineered solution
- for large non-textual data or blobs, consider keeping history only on the meta-data
- e.g. history of EXIF not on the picture itself
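A sketch of this idea, assuming a content-addressed blob store: the picture itself is stored once under its hash, and only the EXIF-like metadata is versioned (names hypothetical):

```python
import hashlib

class BlobAtom:
    """Atom for a large blob: only the metadata carries a history.

    The blob is addressed by content hash and assumed to be stored once,
    elsewhere; every metadata change appends a new snapshot.
    """

    def __init__(self, blob, metadata):
        self.blob_id = hashlib.sha256(blob).hexdigest()
        self.meta_history = [dict(metadata)]

    def update_metadata(self, **changes):
        new = dict(self.meta_history[-1])
        new.update(changes)
        self.meta_history.append(new)
```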
- find the unifying principle
- wiki using DVCS backends
- DVCS using extensions
- can we say DVCS + HTTPd = wiki + CLI ?
- check hybrids
- Hatta Wiki wiki engine – software that lets you run a wiki. It requires no configuration and can be easily started in any Mercurial repository.
- scan own
- on flow look for Gremlin, Yahoo! pipes, DERI pipes, graph in wikis
- discussion with CobraCommander on freenode, 04/08/2010 at 14:09 CET
- clearer foundations
- good practices friendly
- debugging (inc. REPL)
- stack tracing
- live documenting
- are complex mathematical models like Navier-Stokes useful for dataflow programming? required? is it more of a strength than a weakness?
- consider being a unifying overlay first; then, and only then, if there is too much internal hooking, re-work from the bottom up by re-writing solutions
- not re-inventing the wheel
- Cyberinfrastructure for Phylogenetic Research (CIPRES)
- consequences for software engineering
- working toward a continuous cognitive process (as a flow), not a final information-based product
- what can be learned and applied from biological unfolding processes? ontogenesis? corticogenesis?
- leveraging new hardware?
- quantum computers (ask Suzanne), FPGA/OPGA (ask Nicolas), Probability Processing Circuits, ...
- older work
- stringed papers model (stored in Langrolay)
- export back to
- PmWiki to test compatibility
- integration with existing instance
- cognitive flow <=> thoughts
- cost of switching tasks
- are the tasks compatible? is there a closure? what is the time to switch context? etc.
- data flow just as well as function flow
- large distributed incremental processing
- Hadoop Online Prototype (HOP), a modified version of Hadoop MapReduce that allows data to be pipelined between tasks and between jobs
- stream processing: MapReduce jobs that run continuously, processing new data as it arrives
- see also 7.1 Parallel Dataflow and 7.3 Continuous Queries in the NSDI'10 paper
- mention of Telegraph's FLuX (Fault-tolerant, Load-balanced eXchange)
- paper concluding on a potentially new paradigm required for cloud/multi-core computation
- overall Online aggregation (which seems different from sampling) could be very important in a flow oriented system in order to know how to assign resources
- Nova: Continuous Pig/Hadoop Workflows, SIGMOD 2011
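As a toy illustration of online aggregation in a flow-oriented setting, a running mean yields a usable estimate after every element, before the stream ends; the class below is only a sketch, not tied to any of the systems cited above:

```python
class OnlineMean:
    """Online aggregation sketch: a running mean refined per element.

    The estimate is readable at any point of the flow, which is what
    would let a flow-oriented system assign resources early instead of
    waiting for the whole stream (as opposed to sampling up front).
    """

    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def push(self, x):
        """Fold one new element into the estimate and return it."""
        self.count += 1
        self.mean += (x - self.mean) / self.count
        return self.mean
```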
- objects defined as things that do not change despite a shift in perspective
- temporal data visualizations
- explore Jon McCormack's work