After I published the videos of the qp project, I received many messages asking where to download the program. The truth is that I have not shared it publicly. I had actually planned to release it, but after I asked some friends to try it, the feedback was that it is hard to use. Because it is just a concept prototype, I focused on proving the idea was possible and ignored everything else. Right now it is not easy to configure and the user experience is still not good, so I think I'd better not release it until both improve. Please be patient!
This post shows the details of using QP to turn a smartphone into a wireless mouse. Here is what you need:
1. A computer running Linux with Wi-Fi available (the faster, the better). My laptop runs 64-bit Kubuntu 10.04 on a 2.13 GHz Core i3. I tried the Windows version of QP with the smartphone but failed; from what I found on the web, the FFmpeg support in OpenCV has problems on Windows, so the Windows version lacks this function.
2. OpenCV compiled with my own configuration, including TBB for multi-core support, FFmpeg for video processing, and so on. I also installed the required libraries.
3. A smartphone with IP Webcam or a similar app installed. I am using an Android phone. IP Webcam turns the phone into a streaming server that converts the video from its camera into a Motion JPEG stream and sends it out. I don't have an iPhone, but there must be similar apps in the App Store, so an iPhone with the corresponding app should work too.
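For the OpenCV build in step 2, the configuration could look roughly like this. This is only a sketch, assuming a Unix build from an unpacked OpenCV source tree; `WITH_TBB` and `WITH_FFMPEG` are OpenCV's CMake option names, but the exact set of flags you need depends on your OpenCV version:

```shell
# Sketch of an OpenCV build with TBB and FFmpeg enabled
# (run from an unpacked OpenCV source tree; flags vary by version).
mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=Release \
      -D WITH_TBB=ON \
      -D WITH_FFMPEG=ON ..
make -j4
sudo make install
```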
Now, here are the steps I follow to turn the phone into a non-touch wireless mouse.
1. Connect the phone and the PC over Wi-Fi. I have tried two methods: using the phone itself as an access point and connecting the PC to it, or using a wireless router as the access point and connecting both the phone and the PC to it. Both work.
2. Run IP Webcam on the phone to start the video stream server.
3. Run QP on the PC with the server address as a parameter. QP then receives the video stream and processes it, and I can use the phone as a wireless mouse to control the cursor. 🙂
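QP itself is not released and I am not describing its internal algorithm here, so the following is only a toy sketch of the general idea in step 3: estimate the global motion between consecutive grayscale frames (here with phase correlation, a standard technique) and use the resulting deltas to drive the cursor. The function name and the stream URL in the comment are my own illustrative choices, not QP's API; IP Webcam displays its actual stream address on the phone screen.

```python
import numpy as np

def estimate_shift(prev, cur):
    """Estimate the global (dy, dx) translation from frame `prev` to
    frame `cur` using phase correlation (both 2-D float arrays)."""
    F1 = np.fft.fft2(prev)
    F2 = np.fft.fft2(cur)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-9        # keep only the phase
    corr = np.fft.ifft2(cross).real      # correlation peak sits at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape                    # wrap into the signed range
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

# In a real loop, frames would come from the phone's stream, e.g.
# cap = cv2.VideoCapture("http://<phone-ip>:8080/video"), and the
# (dy, dx) deltas would be fed to a pointer-moving call.
```

The advantage of phase correlation for this kind of toy is that it recovers a clean global translation even from fairly noisy frames, with one FFT pair per frame.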
Here is a not-so-good video:
For better visual effect, please watch this one:
It is really not easy to draw the smiley. The configuration process is a little difficult; sometimes you need to guess where the problem is and how to solve it. Wi-Fi is slower and has a longer delay than USB, which degrades the user experience a lot. My smartphone is a cheap one with a low-resolution camera that is not very sharp. In addition, holding a phone is much clumsier than holding a webcam. Anyway, I did it! 😛
Frankly, I am surprised by the feedback, which is far beyond my expectations. I just wanted to know whether people would like this new technique. It seems there are lots of dreamers in the world. 🙂 Awesome! Engadget! You know, when you are working on something new that only exists in sci-fi movies, especially alone, you need advice, encouragement, or even criticism to push you forward. Now I have that, and I know this is worth doing. Thanks a lot!
There are some points I'd like to make:
- Clarke's Third Law says, "Any sufficiently advanced technology is indistinguishable from magic." So I can understand that some people cannot believe it is real. But it is!
- Someone said he or she saw Kubuntu in the video. Bingo! There are two versions of qp, for Windows and for Linux, and I used both to make the videos.
- I didn't write any handwriting recognition software; I just used the one in Sogou Input for my demo. And the word processor is EIOffice, not Word. I list the software tools I used at the end of the video.
- On comparisons to the Kinect:
- I take the comparison as a compliment. A year ago, when this idea came to me out of the blue, I already knew about Kinect, which is amazing. What I did was simply make my own idea come true, and that is a big satisfaction for me.
- Technically, comparing qp to Kinect is like comparing a toy car to a Benz. In my opinion, they are not rivals, just two different approaches to, and experiences of, HCI.
- I deliberately used an ordinary webcam for this demo even though it is a little clumsy. I wanted to show that this new technique can turn an ordinary webcam into a "magic stick". That is cool! It would be straightforward to redesign it as a pen-like or glove-like device, etc. But more importantly, I first had to prove that the underlying principle works.
- As a concept prototype, it is far from perfect. What you have seen is just its best side. There is a lot left to do to make it a real product. Since I have only been studying computer vision, pattern recognition, machine learning, etc. for one year, there is still too much to learn.
- There are some problems to solve, the most important being motion blur and defocus. I am using an ordinary webcam, and when I move it fast the images become blurred and defocused, which hurts the subsequent processing. That is why it looks slow in the video. I need real-time motion-blur and defocus restoration algorithms that work with a moving camera, preferably in C/C++ with examples. If anybody wants to help, please contact me. Thanks in advance!
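Short of actually restoring blurred frames, one cheap workaround is to detect them and skip them. A common sharpness measure is the variance of the Laplacian response: blur suppresses high frequencies, so the variance drops. The sketch below is my own illustration, not part of qp; the function names and the threshold in the comment are hypothetical. It applies a 5-point Laplacian to a grayscale frame:

```python
import numpy as np

def laplacian(img):
    """5-point discrete Laplacian on the interior of a 2-D array."""
    return (img[:-2, 1:-1] + img[2:, 1:-1]
            + img[1:-1, :-2] + img[1:-1, 2:]
            - 4.0 * img[1:-1, 1:-1])

def sharpness(img):
    """Variance of the Laplacian response; low values suggest
    motion blur or defocus."""
    return laplacian(img).var()

# A frame could then be skipped when sharpness(frame) falls below a
# threshold tuned for the particular camera (hypothetical value):
# if sharpness(frame) < 50.0: continue
```

This does not fix the blur, of course, but dropping the worst frames keeps a tracker from being misled by them while the camera is moving fast.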