Whenever you read an article or a book about AI, two questions are, explicitly or implicitly, discussed in it.
1. What is intelligence?
2. What is consciousness?
In the field of AI, it is common to say that a machine is intelligent, or comes close to human intelligence, when it passes the famous Turing Test. The Turing Test is as simple as it is ingenious. There are two test subjects, one human and one machine, and one interrogator who interviews both. The machine's task is to convince the interrogator that it is not a machine, while the human test subject tries to convince the interrogator that he or she is indeed human. The interrogator's task is to figure out who is who. An intelligent system passes the Turing Test if the interrogator identifies it as the human (for more information about the Turing Test, read Turing's paper: http://loebner.net/Prizef/TuringArticle.html).
I would consider the Turing Test a valid indicator of whether or not a machine is intelligent; however, I don't think even all humans could pass it. Consider this: we do not regard machines that can solve painful equations and prove punishingly hard theorems as intelligent. Why? Because they are machines and are supposed to be capable of such incredible mathematical magic, AND because they won't pass a Turing Test. Just think of Deep Blue, an incredible chess machine, yet one that could not answer the simple question: What's your favourite band? (Well, it might answer "Queen"... ;-)). Or take Watson, a machine capable of natural language processing, but I strongly doubt it would make it through Turing's sonnet example, or even his chess example. Now to humans: take Grigori Perelman, or other mathematical geniuses. I really doubt they could pass the Turing Test. The main point is that we regard such humans as absolute geniuses, and often take their inability to participate in a normal conversation as further proof of their genius.
Another reason why we consider machines not intelligent lies in our expectations of their capabilities. We often expect them to know everything, and to know it instantly, which I think is just plain wrong. Intelligence needs time to evolve, as it does in humans: humans are not Nobel laureates at birth, and neither are machines. Just take genetic algorithms or neural networks: both need time to converge on correct results, but if you give them that time, they become truly intelligent.
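To make the "intelligence needs time" point concrete, here is a minimal genetic algorithm sketch (the names and parameters are my own illustration, not from any particular library): it evolves random bitstrings toward the all-ones string, and the best fitness improves generation by generation rather than appearing instantly.

```python
import random

def evolve_onemax(length=20, pop_size=30, generations=50, seed=0):
    """Toy genetic algorithm: evolve bitstrings toward all ones.
    Fitness = number of 1s; returns the best fitness per generation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    fitness = lambda ind: sum(ind)
    history = []
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        history.append(fitness(pop[0]))        # record the current best
        parents = pop[: pop_size // 2]         # keep the fitter half (elitism)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)     # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(length)          # single-bit mutation
            child[i] ^= 1
            children.append(child)
        pop = parents + children
    return history

history = evolve_onemax()
print(history[0], history[-1])  # best fitness at the start vs. at the end
```

Run it and the early generations are mediocre while the later ones approach the optimum: the "knowledge" was not there at birth, it accumulated over time.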
But let's come back to the question of what intelligence really is. I think intelligence is a lot about diversity, learning, and a good amount of randomness. From mathematicians to painters to comedians, the form in which intelligence manifests itself in a person is hugely diverse. Genetic inheritance also contributes a lot of randomness, and finally our knowledge, which is a product of intelligence, comes mainly from learning. At its heart, intelligence may be reduced to pattern recognition and statistics (= experience). When we have to make a decision, we search our memory for similar situations (pattern recognition), recall which of our past decisions worked and which did not, and derive from that an answer to our current situation (statistical analysis). The result of our decision, along with the whole situation, is immediately stored in memory again; thus we have learned something and gained more experience.
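As a toy illustration of this "pattern recognition plus statistics" view (a sketch of my own, not a real cognitive model), consider an agent that recalls similar past situations, picks the action with the best weighted average outcome, and stores each new result as fresh experience:

```python
from collections import defaultdict

class ExperienceAgent:
    """Decide by matching the current situation against remembered ones
    (pattern recognition) and scoring past outcomes (statistics)."""

    def __init__(self):
        self.memory = []  # (situation, action, reward) triples

    def similarity(self, a, b):
        # pattern recognition: feature overlap between two situations
        return len(set(a) & set(b))

    def decide(self, situation, actions):
        scores = defaultdict(list)
        for past, action, reward in self.memory:
            w = self.similarity(situation, past)
            if w > 0 and action in actions:
                scores[action].append(w * reward)
        if not scores:
            return actions[0]  # no relevant experience yet: arbitrary choice
        # statistics: pick the action with the best average weighted outcome
        return max(scores, key=lambda a: sum(scores[a]) / len(scores[a]))

    def learn(self, situation, action, reward):
        # store the situation and its outcome as new experience
        self.memory.append((situation, action, reward))

agent = ExperienceAgent()
agent.learn(("rain", "cold"), "umbrella", 1.0)
agent.learn(("rain", "warm"), "umbrella", 1.0)
agent.learn(("sun", "warm"), "umbrella", -1.0)
print(agent.decide(("rain", "windy"), ["umbrella", "sunglasses"]))  # prints "umbrella"
```

The loop of decide, observe, and learn is the whole point: the agent's answers improve only because its memory of outcomes grows.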
What about consciousness? Humans are aware of themselves, and they are aware that they are aware of themselves, and so on. But are we aware of ourselves when we are born? Or is our self-consciousness simply a result of other people interacting with us, and therefore an evolutionary process? We are a species capable of language, so our interactions can be considered more complex than those of, say, cows. Is our infinite strange loop of consciousness simply a result of these highly sophisticated interactions? I believe it is.
What about the future?
I believe the Turing Test will remain valid as an indicator of whether or not a machine deserves the honour of being called intelligent. But at a certain point we may have to fine-tune the Turing Test a little, for example to test the learning and pattern recognition capabilities of the test subjects. Imagine that in a Turing Test interview, the interrogator embeds a little riddle (e.g. one of those milk can riddles) in a question about history, and also tells the test subject one solution to the riddle, but not the optimal one. A few questions later, the interrogator poses the riddle again in a slightly different form, say with oil barrels instead of milk cans. For a machine, it would be a simple, straightforward logical task to solve the riddle and find the optimal solution. A human test subject, however, will recognise the riddle as the one heard a few questions before and will most likely give the interrogator the suboptimal solution from the earlier story.
With that point reached, Ray Kurzweil's prediction about the Singularity will be REALLY near.