Let us say I visited a beautiful garden in the afternoon, and now I am lying in bed visualizing my favorite flower from it. Let us say that I am particularly struck by the color of that flower. When I visualize the flower now, I do not just see the flower’s face, that is, the flower’s beautiful petals, which is what I liked the most. I also see its stem, its leaves and possibly all the other flowers in that garden around it or in the backdrop. Here I can say, for I know, that actively, consciously, I visualize just the flower’s face, because what most captivates me about it is its vivid and bright color; but, as if by a trick of my mind, inactively, or by my unconscious, not only does the face of the flower come before my mind’s eye, but so do its surroundings, as if to frame the flower, to add to an overall aesthetic effect.
What seems to be present in my ‘thinking space,’ or my space meant for visualization, that is, what my mind’s eye sees, is more than what I actively think before it, more than just the face of the flower I like. Now, there is usually some consistency to the images that come to surround the flower—they seem to borrow very much from the reality I have lived in. Why, I may ask, do I see the stem, the leaf, the garden and the skies, and not flames of hell around the flower’s face, or a puddle of dreary mud, or, if we are not to be so aesthetically displeasing, a gentle rain? Why do I not see something random in the backdrop, like a beautiful rainbow? It seems that there is an element of consistency even to what is inactively thought.
The objects and colors that surround the flower’s face are not accurate representations from my memory of what I saw in the garden that afternoon. The stem, the other flowers and the sky are, in one sense, stock images, but in quite another sense they are not even that standardized and transferable between different visualizations—perhaps if I think of a beautiful girl’s face on some other evening, there will be a completely different sky in the backdrop, not to mention quite randomly assigned, but aesthetically pleasing, clothes on her body. It seems that if I were actively thinking up the backdrop along with the face, I would be too consumed with imagining the details of the backdrop, and too concerned with it, so much so that that feeling of relaxation or, in the case of the girl’s face, excitation, would give way to an artistic process of trying to form the perfect image. We know intuitively that such a ‘perfect painting’ is completely beside the point when we imagine something—the backdrop is beautiful, true, but remains, resolutely, and much to our pleasure, a backdrop, so that we may focus on the foreground exclusively.
Given my limited powers of meditation, or perhaps owing to a general tendency to replace one image with another, whenever I turn to the backdrop, whenever I try to focus on the stem or leaf to appreciate its beauty independently of the flower’s face, something happens and that particular image is lost—any focus towards the periphery dissolves the whole image, including the actively visualized flower’s face, from my mind’s eye’s field of vision. Moving the line of gaze from the central object to its backdrop seems less a perfected feature of the mind’s eye than an occasion for its natural rule to take over: the mind moves to another image altogether, with the backdrop now a foregrounded central image, and yet another backdrop behind it that goes neglected.
Therefore, to be conscious of something—here, to actively imagine an image that gives me pleasure—seems always to involve an inactive and unintended element in the backdrop. This is true even when I imagine the simplest thing, like a bright red circle: it rarely appears just as a bright red circle, for it is sometimes on a wooden table, or in a field of blackness that is not just the blackness caused by the closing of my eyes. As if by a feature of my mind, something between intended image and unintended image appears before my mind’s eye, and it is not an anomaly but a mundane regularity.
The image, in both its focused and unfocused parts, loses much of its intensity, or we largely cease to be enchanted by it, when we open our eyes and perform tasks in the world beyond the mind’s eye’s dark canvas. Would we be able to move on from our images in the mind’s eye if they were too elaborate, vivid and enchanting? Perhaps we would, but perhaps our engagement with the real world would still be much too impoverished.
We seem to be not just ‘thinking beings,’ as a Cartesian would perhaps have it, but beings hard-wired with such an interior mind’s eye space for visualization—equipped with backgrounding and foregrounding—so that we actively think for an appropriate amount of time and, given the dimensions of that mental canvas, within an appropriate amount of space.
What Will The Machine See In Its Canvas?
The assumption I make is that an artificial intelligence machine will be designed by us on the model of human thinking. This would mean that AI developers take the whole image a human has imagined—the flower’s face, stem, garden and rain—and create the conditions for such an image to develop.
The AI machine will be too actively consumed in its mental activity, because it will have to imagine a picture too intricate and detailed, taking up energy and, more importantly, time in which it could have done something else to develop itself or to help in the real world.
Moreover, the AI machine will diverge from the human thinker, who, even if I describe to him or her a perfect image of what came before my mind’s eye, will still likely find a backdrop to it that enhances the pleasure or displeasure he or she feels. The backdrop is a mood-setter in us.
The backdrop humans imagine may be just filler material or stock imagery, a practical effect of our visual thinking so long as it does not need to paint the whole canvas. But the AI machine would not be able to choose between when to imagine extensively, in terms of space and time, and when to be more rapid and austere—to just get the job done. This could be a significant impediment to our recognizing it as an intelligent thinker—it would be too lost in thought.
So we have a problem that is at first glance simple—the difficulty of programming or instilling in an AI machine a visual language, a visual system of signs, so that it can make decisions in the real world. It seems, at the moment, quite a solvable problem: if the machine can understand voice and alphabets, it should be able to understand visuals as part of a sign system. On the other hand, the backdrop to a human’s visualizations is not really part of a visual language at all; it appears automatically, and many a time we pay absolutely no attention to it while making decisions in the real world. Our decisions are usually based on the foreground, not the backdrop.