As humans, we are severely limited in our ability to understand, visualize, or perceive anything beyond the three spatial dimensions. We can also perceive changes over time, which lets us visualize 4D changes and patterns. However, it is extremely unintuitive for us to understand spaces of any higher dimension. I remember first reading about this in Michio Kaku’s Hyperspace, where he constructs an analogy between us and fictional 2D beings that can only perceive two dimensions. These beings would see everything in 2D, so we (3D beings) would appear to them as contours constantly changing shape, since we would be projected onto the 2D plane that those beings can see.
Today, since we have very different kinds of data that are often high dimensional – “HD” – (not the same as Big Data!), there is an increasing appetite for methods that allow us to visualize patterns and changes in these HD spaces. Areas such as topological data analysis (TDA) attempt to solve these problems using tools from topology, a field of mathematics that has been very active since the 1930s. Ayasdi, a data startup founded by Gunnar Carlsson of Stanford, provides solutions to data problems using topological tools; it recently raised $55M in funding and is seeing ~400% growth!
High dimensional data visualization is important and needs more attention from mathematicians and engineers. Interestingly, a recent TED talk this year spoke about sensory substitution, which is basically the process of taking high dimensional real-world signals such as images, audio, etc., mapping them to a much lower dimensional space, and feeding the low dimensional signal to the brain via different sensory inputs, such as electrodes on the tongue or tactile feedback on the skin. The claim is that the brain can automatically learn to “see” or “hear” after a few weeks of training with these new “eyes” or “ears”, since all perception happens in the brain. It is very inspiring and gives a lot of hope to people for whom one or more of the primary senses do not work as designed.
Now imagine: instead of “visualizing” 3D data all the time, what if we could map dimension #4 to an auditory signal, #5 to a tactile signal, and #6 to some other type of signal, and have a human “perceive” the data (here I use perception as the generalization of visualization)? How effective are we, as humans, at absorbing all this information at once and understanding the patterns in the high dimensional space? We will encounter cognitive overload: beyond a certain point our brain cannot make any sense of the patterns. But as long as we operate within that limit, it will be very exciting to see how we understand 6-dimensional data. An added benefit is that we can map very HD data down to 6 dimensions, which is a whole lot better than mapping it down to 3 (see the curse of dimensionality).
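To make the idea concrete, here is a minimal sketch of that pipeline: compress HD data down to 6 components (using plain PCA via numpy's SVD, just one of many possible reduction methods), then route the components to hypothetical sensory channels. The data, the channel assignments, and the variable names are all illustrative assumptions, not a real system.

```python
import numpy as np

# Synthetic stand-in for a high-dimensional dataset:
# 500 samples living in a 50-dimensional space.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))

# PCA via SVD: center the data, keep the top 6 directions of variance.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt[:6]        # (6, 50) projection matrix
Y = Xc @ components.T      # each sample is now a 6-dimensional point

# Hypothetical routing of the 6 components to different senses:
visual  = Y[:, 0:3]  # components 1-3 -> rendered as 3D coordinates
audio   = Y[:, 3]    # component 4   -> pitch of an auditory signal
tactile = Y[:, 4]    # component 5   -> intensity of haptic feedback
other   = Y[:, 5]    # component 6   -> some further channel

print(Y.shape)  # (500, 6)
```

The key point is only the shape of the output: however many dimensions the raw data has, 6 perceptual channels give you twice the bandwidth of a purely visual 3D plot.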
Going even further, if we perfect the art of data perception in HD spaces, can we also perceive why some machine learning algorithms fail and some don’t? Can we perceive boundaries between classes in massive datasets? The interesting thing about these questions is that I am not sure who could answer them better – a neuroscientist or an engineer!