Overall, I find the argument useful for its intended purpose: making obvious the distinction between handling syntax and understanding semantics. Unfortunately, the most famous (in armchair IT circles) test for AI is the Turing Test, which is based entirely on the appearance of understanding semantics, regardless of whether any qualia of understanding are present. I think Searle's point is clear, and it must be acknowledged.
I can't swallow the whole argument, however, on the basis of qualia and the consciousness of a system. As an independent observer of the Chinese Room, I have no access to the inner life of the total Chinese Room system, so I cannot conclusively say whether that system has consciousness. Even if I were the man inside the room, I couldn't make that evaluation, any more than the neurons in my head are aware of my total consciousness. My qualia are available only to me; yours are available only to you. It isn't possible to say whether the Chinese Room has qualia, because they would be available only to the Chinese Room.
However, from a physicalist point of view (the most convincing metaphysics I've encountered), I must leave open the possibility that the Chinese Room has consciousness. Minds arise from physical matter, after all. So for me, it reduces to a matter of probability. Is it likely that a Chinese Room system has consciousness? Not nearly as likely as a dog having consciousness, and probably only slightly more likely than a thermostat having consciousness. And for more on the thermostat, go read David Chalmers's The Conscious Mind.