Remove bottlenecks from non-dedicated hardware and software tools aimed at being cognitive supports (cf RencontreAFTParis#ArgumentCentral).
Evolution of the needs
Run dedicated software on dedicated hardware, starting by removing the most important bottleneck, step by step (mimicking agile software development).
- check first that the tool is correct
- useful toward the goal it helps to achieve
- consistent, no data loss, etc.
- then and only then consider (costly) optimization
- see also LeanThinking
- locate the latency bottleneck
- list components of the chain
- network access
- local access
- data validation
- list average delay for each component and transfers
- find which specific usage requires it
- all functions
- automate the process
- warning: the quantitative nature of the study is a risk
- even with multiple variables, if these are not correctly selected, one would "optimize" in the wrong direction
- run the automated process periodically
- log the quantitative result of the process
- provide visualization of results over time for easy comparison
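The profiling loop above could be sketched as follows. This is a minimal sketch: the component names, delays, and the `profile_chain` helper are all hypothetical stand-ins for the real chain being measured.

```python
import time

# Hypothetical components of the latency chain (names are illustrative)
def network_access():
    time.sleep(0.05)   # stand-in for a network round-trip

def local_access():
    time.sleep(0.01)   # stand-in for local disk/cache access

def data_validation():
    time.sleep(0.002)  # stand-in for validation work

def profile_chain(components, runs=3):
    """Time each component over several runs; return average delay per name."""
    averages = {}
    for name, fn in components.items():
        start = time.perf_counter()
        for _ in range(runs):
            fn()
        averages[name] = (time.perf_counter() - start) / runs
    return averages

# One periodic run: log the result, then attack the slowest component first
chain = {"network access": network_access,
         "local access": local_access,
         "data validation": data_validation}
log = [profile_chain(chain)]               # append one entry per periodic run
bottleneck = max(log[-1], key=log[-1].get)
```

Logging one dictionary per periodic run gives the time series needed for the visualization and comparison step.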
Integrate with Point & Click, When Thinking Stops and Seedea:Research/BacktrackLearning
Check Programming#Profiling and Numbers
Explain and generalize
- log the process
- apply the same principle and adapted tools to other usages
- cf Education
Initial asynchronous version.
- Metaboard from MetaLab
- consider fixing Apple iPod Nano with Rockbox Open Source Jukebox Firmware
- inverted-finger shape
- touch-sensitive even through clothes
- consider "modes" a la Vim but for context, for example morning mode to do action meta x A1, A2, A3, A4, afternoon, meta x B1, B2, B3, B4, etc
- probably requiring feedback to show which mode you currently are in
- pico to keep it tactile
- plastic case
- inverted-hand shape
- minimalist for pocket
- taking the social side into account (avoiding social disruption)
- interface with existing wikis (gn5)
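The time-based "modes" idea above could be sketched as a simple lookup table. The mode names, keys, and actions here are invented for illustration; a real device would also display the current mode as feedback.

```python
from datetime import datetime

# Hypothetical mode table: the same key maps to different actions per mode
MODES = {
    "morning":   {"meta-x": ["A1", "A2", "A3", "A4"]},
    "afternoon": {"meta-x": ["B1", "B2", "B3", "B4"]},
}

def current_mode(hour=None):
    """Pick a mode from the time of day (crude two-mode split for the sketch)."""
    if hour is None:
        hour = datetime.now().hour
    return "morning" if hour < 12 else "afternoon"

def actions_for(key, hour=None):
    """Resolve a key press to the action list of the active mode."""
    return MODES[current_mode(hour)][key]
```

The same key press thus does different work depending on context, which is why showing the active mode to the user matters.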
Add discussion with Franck on usages. Also consider simply treating your data as points under or above one or two baseline curves. Each button you press (or don't press), compared with the current time, is used to say whether you are on the right path (according to your metrics) or not.
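The baseline-curve comparison could be sketched as below. The two linear baselines and the pass/fail rule are invented for the sketch; real baselines would be fitted to your own metrics.

```python
# Compare a timestamped data point (e.g. a button press) against baselines.

def lower_baseline(t):
    return 2.0 * t           # hypothetical lower bound at time t

def upper_baseline(t):
    return 2.0 * t + 10.0    # hypothetical upper bound at time t

def on_right_path(t, value):
    """True if the point lies between the two baseline curves at time t."""
    return lower_baseline(t) <= value <= upper_baseline(t)
```

A single boolean per event keeps the feedback cheap enough to compute on minimal hardware.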
- synchronous usage
- brain imaging through FPGA for low-latency signal processing
- Warning: is this actually the bottleneck?
Directly imported from Needs#WikiOnAChip
- what would be the benefits (and trade-offs) of embedding this wiki on dedicated hardware?
Second phase, integrate with Seedea implementation to actually justify the ExoBrain name.
Constraints (ordered by importance)
- cost; baseline is either
- nothing (no tool used)
- pen and paper (cost close to 0).
- single point of failure
- energy requirements
- consequence of non-availability of the tool for random period of time
- see also Low-quality network coping mechanisms
- weight and size
- equivalent to physiological energy requirements and flexibility
- energy requirements
- see also reliability
- input module
- see above, mainly mobile (resistant, small, light-weight, low energy consumption)
- core module
- see above (how does it actually relate to the computation infrastructure?)
- computation infrastructure
- farm of GPUs (computations of large matrices or computer vision with e.g. OpenVIDIA)
- FPGAs (low-latency signal processing)
- linked through
- optic fiber (fast) or
- quantum link (arguably secure)
- key topological (thus also geographical) position in the network
- for FPGA/GPU/... and other specialized hardware, consider colocation in a datacenter
- "it is advantageous to locate your servers near an Internet backbone as close as possible to the exchange on which your trades will be executed." (p. 77, chapter 4 of Quantitative Trading)
- routed by OpenFlow devices (very flexible)
- detect tasks with large dataset size but no latency requirement, then package and send them to remote services
- face-to-face collaboration
- GE GPGPU CUDA-enabled rugged products
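The remote-offloading rule above (large dataset, no latency requirement) could be sketched as a routing predicate. The task fields and the size threshold are assumptions made for the sketch.

```python
# Hypothetical task routing: large datasets without latency requirements
# are packaged for remote services; everything else stays local.

def route(task):
    """task: dict with 'dataset_mb' and 'latency_sensitive' keys (assumed)."""
    if task["dataset_mb"] > 100 and not task["latency_sensitive"]:
        return "remote"
    return "local"
```

Keeping latency-sensitive work local while shipping bulk computation away matches the FPGA-for-signals / GPU-farm split described above.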