A combinator-based development stack
Eliminate the von Neumann bottleneck between the CPU and main memory.
It's been a number of years since I finished my Computing Science degree at the University of Technology Sydney. At the time I was inspired to explore functional programming, and I spent every opportunity I could coding in Miranda, a combinator-based language that predates Haskell.
It was well known at the time that while Miranda was elegant, it was slow to execute. It had some commercial success but was used mainly in academic institutions. Miranda was closed source until recently, so fixing the execution speed would have required reverse engineering; the implementation's memory allocation was the perceived bottleneck.
An anniversary of combinators inspired Stephen Wolfram to revisit them and explore them further. I decided to refresh my theory and application of lambda calculus, combinators, and the origins of fundamental logic to see if there was more that could be explored.
I was particularly interested in the SKI combinators as atomic operations that could be hardware-optimised for applications like machine learning, AI, and large-scale data analytics.
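As a concrete reference point, here is a minimal sketch of the three SKI reduction rules in Haskell. It illustrates the standard calculus only; the names Term, step and normalise are my own, not taken from any existing implementation.

```haskell
-- A minimal SKI combinator calculus: three atomic combinators
-- plus application. Illustrative only; a hardware implementation
-- would represent terms as a shared graph, not a tree.
data Term = S | K | I | App Term Term
  deriving (Eq, Show)

-- One step of normal-order (leftmost-outermost) reduction.
step :: Term -> Maybe Term
step (App I x)                 = Just x                          -- I x     -> x
step (App (App K x) _)         = Just x                          -- K x y   -> x
step (App (App (App S x) y) z) = Just (App (App x z) (App y z))  -- S x y z -> x z (y z)
step (App f x) =                                                 -- no redex at the root:
  case step f of                                                 -- reduce the function part first,
    Just f' -> Just (App f' x)
    Nothing -> App f <$> step x                                  -- then the argument
step _ = Nothing                                                 -- S, K, I alone are normal forms

-- Reduce to normal form (may not terminate for every term).
normalise :: Term -> Term
normalise t = maybe t normalise (step t)
```

The appeal for hardware is that the entire instruction set is those three rewrite rules applied to a graph of nodes.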
This is a search down a path of computation that wasn't taken. "I would rather have today's algorithms.." is an inspiration to look at combinators again, to see if their fundamental nature can be combined with modern hardware acceleration techniques.
The question is whether I can apply a current algorithm to vintage hardware designs, even ones that were only ever conceptualised, and still go fast.
Can I revisit roads of computation not taken because the methods weren't available at the time, now that those methods have become available but haven't been re-applied?
Can I make the implementation so minimal that the design looks like that of a simple puzzle or toy system?
How do I make a minimal implementation that provides maximal functionality?
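One concrete answer, continuing the hypothetical sketch above: even I is redundant, because S K K reduces any argument back to itself, so two atomic combinators already give universal computation.

```haskell
-- S K K applied to any term t reduces to t:
--   S K K t  ->  K t (K t)  ->  t
skk :: Term
skk = App (App S K) K

-- ghci> normalise (App skk I)
-- I
```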