On the one hand, I really liked this story. It was short, to the point, and thought-provoking. I'm very interested in the singularity but haven't read much about it, and this struck me as a good primer for thinking about it. I have been wanting to study the history of mathematics, physics, and the life sciences in more detail, and this made me think that maybe I ought to read more historical science fiction as an introduction to the history of science, so I'm grateful for that.
On the other hand, the story struck me as a little overly optimistic about our relationship to technology, and its moral seemed a bit politically problematic: Asimov glosses over the development and change of humanity over time, opting for a view that quite literally ignores difference at the cultural, political, social, and individual levels. In this story, all such issues play no role, whether they were meant to have been resolved or were simply backgrounded. What remains is a narrative of human beings increasingly without worlds, one a bit too reliant for my liking on a self-programming robot god. Maybe it just sounds too much like a metaphor for capitalism plus rebirth religiosity for my Jacobin sensibilities.
The other issue involves plot development. If we are operating on the premise that the machine and the people required energy to survive at the beginning of the story (and more of it to maintain hyperspace travel), the ending totally deflates that premise and its attendant conflict. While obscure, novel language and ideas can elicit awe, the lack of plot development and conflict deadens the story, making our apparently endless future pretty damn boring. I understand that the main conflict is the epistemological block of *knowing* the outcome, and that this block is overcome, but it feels lazy to let Asimov off the hook without addressing the physical problem of a supercomputer in hyperspace using up energy and then regenerating matter (maybe it stored up the energy? Even so, it would have had to program itself a whole new capacity for generating matter). Furthermore, if the machine only needed to provide an answer and energy was no problem at all, then why wouldn't it simply allow "Man" to recognize the answer to the last question and maintain its form as a combined-mind AI entity?
Another interesting way of approaching this story is through the way it deals with mind. I am rather partial to the French philosophers of embodiment who were very active at the time this story was written, and I think they give good reason to hold that we are embodied beings through and through: that *we* are not minds or souls distinct from bodies, and that our being is in fact ontologically grounded in our embodiment and worldliness. Asimov presents mind in a different way, and even granting its (let's say material) possibility, it seems odd to think of *human being* as being without any world or body.