Leave me comments so I know people are actually reading my blogs! Thanks!

Wednesday, January 21, 2009

Random Thoughts: Worst products of CES 2010

CES (the Consumer Electronics Show) is the world's largest consumer technology tradeshow, where many of the latest technology innovations are revealed and exhibited, and this year's show is happening right now in Las Vegas.

I ran across an article in the Huffington Post listing some of the worst products of CES 2010. I must confess that I am utterly amazed by the creativity and ingenuity behind some of the products selected. I'll show three of them here (also with embedded videos), and hopefully you'll enjoy them as much as I did! So here you go!

1. As Seen On TV Hat

Ever found yourself utterly bored with what you are doing and would rather watch your favorite movie instead? Now for only $19.95, there's a solution for you: a baseball hat that lets you watch movies anywhere, anytime (as long as you are wearing the hat)! Now you can jog outside (a desert is recommended) while watching a movie; or you can hike that boring hiking trail while enjoying a thriller (stay away from the cliff, not that kind of a thriller); how about enjoying a comedy show while waiting for a deer to show up near your hiding spot on a hunting trip? The possibilities are limitless -- that is, if you don't mind looking a little bit, well, how to say it -- out of place!


 
(Photo Credit: Engadget)




2. Phubby Wrist Cubby

Ever felt sad, depressed, or distressed because you couldn't feel the vibration of your phone and missed important phone calls? Were you ever mad at yourself because you couldn't find your cell phone or iPod? Ever felt disgruntled because you missed your better half's call while playing soccer or football? No worries! For only $12.95, your problem is solved! You can now carry your phone (or your keys, or your wallet, or your spare change) anywhere, doing whatever!

(Disclaimer: you are solely responsible for whatever you carry in your Phubby if you decide to shower/bathe or swim)

What's even better: you can put rocks in there to strengthen your arm muscles. You can even carry your pet bird or turtle with you anywhere you go, and you can even feel their heartbeat (they won't suffocate because they can breathe through the holes)! Well, on second thought, I don't know if you'll be able to feel the heartbeat of your pet turtle. To make it even better, you can pick your favorite color or pictures for a Phubby Hip Cubby to carry your concealed weapon! What are you waiting for? Go to phubby.com and get yours!


 
(Photo Credit: The Huffington Post)




3. Android-powered Microwave

Ever craved a machine that will let you browse the Internet for a picture of your favorite food, and then, with a simple push of a button, cook it for you? Well, at least you can do the browsing part with this wonderful microwave that runs Google's cell phone operating system: Android. Maybe this will help your better half stick around the kitchen more often, since she could browse all the wonderful recipes online? But wait, I am the one microwaving TV dinners all the time. Where's the Android-powered stove? Did anyone see that at CES 2010?


 
(Photo Credit: UberGizmo)




If you want to read more about these uniquely interesting products exhibited at CES 2010, see a slide show of them here. Or you can watch the video below, named "7 Weirdest, Wackiest Products From CES 2010", which covers some of them.



Video of the Day:

Since we are on the subject of CES 2010, here's a product people actually thought was very good: an indestructible hard drive that will withstand fire, water, drops, and a 35,000 lb tractor. Here I present to you: ioSafe!


Disclaimer: Criminals, don't use this hard drive!

Tuesday, January 20, 2009

Robot of the Day: UAVs at BYU

Since in the previous post I talked about a BYU UAV demo dry run, I thought it might be a good idea to present some of the UAVs we used at BYU for research purposes.

The research group WiSAR (which stands for Wilderness Search and Rescue) at BYU consists of faculty and students from three research labs: the MAGICC lab (ME and EE departments), the HCMI lab (CS department), and the Computer Vision lab (CS department). The objective of the research group is to investigate and develop technologies to support wilderness search and rescuers with an Unmanned Aerial Vehicle (UAV).

In the past, we have been using UAVs built by MAGICC lab students. The UAV in the picture below is named Madre (Spanish for "mother," as in the mothership) and was built by the MAGICC lab. Madre retired in 2008 and now simply sits on top of a closet in our lab for display purposes only.


 
Madre: UAV built by BYU MAGICC Lab


Some students in the WiSAR group graduated and then decided to license technologies from BYU and start a local company making UAVs. The company is named Procerus and has been quite successful. So later we simply bought a plane from them. The second picture below shows the current UAV we use. We just called it "The UAV" because we couldn't come up with a good name.


 
UAV built by Procerus. It doesn't have a name. We call it "The UAV".


The fixed-wing UAVs we use in our research are small and light, with wingspans of 42-50 in. Each weighs about 2 lbs. They are propelled by standard electric motors powered by lithium batteries -- good for up to 2 hours in the air.

The sensors onboard include three-axis rate gyroscopes, three-axis accelerometers, static and differential barometric pressure sensors, a global positioning system module, and a video camera on a gimballed mount. A 900 MHz radio transceiver is used for data communication, and an analog 2.4 GHz transmitter is used for the video downlink. The autopilot was designed at BYU and built on a small microprocessor. It stabilizes the aircraft's roll and pitch angles and also flies the UAV to a desired altitude or waypoint.
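To give a flavor of what one small piece of such an autopilot does, here is a minimal sketch of a roll-stabilization loop. This is not the BYU autopilot; it just assumes a generic PID approach with made-up gains, purely for illustration.

```python
# Minimal sketch of a roll-stabilization loop (NOT the BYU autopilot):
# a generic PID controller with made-up gains, for illustration only.

class RollPID:
    def __init__(self, kp=0.8, ki=0.05, kd=0.2, aileron_limit=0.35):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.aileron_limit = aileron_limit  # max aileron deflection (rad)
        self.integral = 0.0

    def update(self, roll_desired, roll_measured, roll_rate, dt):
        """Compute an aileron command from attitude and rate-gyro feedback."""
        error = roll_desired - roll_measured
        self.integral += error * dt
        # The rate gyro measures roll rate directly, so it serves as the
        # derivative term without differentiating a noisy signal.
        command = self.kp * error + self.ki * self.integral - self.kd * roll_rate
        return max(-self.aileron_limit, min(self.aileron_limit, command))

# Example: hold wings level at a 50 Hz control rate.
pid = RollPID()
aileron_cmd = pid.update(roll_desired=0.0, roll_measured=0.10, roll_rate=0.02, dt=0.02)
print(aileron_cmd)
```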

Each UAV has many autonomous capabilities. For example, it can auto-launch (all you have to do is throw it into the air), auto-land (crash-land after spiraling down), and, if it loses communication with the base, automatically return to base and loiter. The video below shows the auto-launching and auto-landing capabilities of Madre.




The gimballed camera onboard the UAV provides a bird's-eye view of the area. Because the UAV can quickly get to hard-to-reach areas and cover lots of ground quickly, the visual information it provides can help wilderness search and rescuers improve situational awareness and support the search for a missing or injured person. The next video shows the kind of video the operator can see from the ground. (You can skip to the end to see the crash landing.)




Maybe you have noticed from the previous video that video data from the UAV is not easy to use (jitter, disorientation, too fast, etc.). That's why our research group developed video mosaicing algorithms to piece video frames together to help with the search task. This method enables video frames to stay in sight much longer for the video observer, thus improving detectability.
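For the curious, here is a rough sketch of the general idea (not our actual pipeline, just a minimal illustration with OpenCV): estimate a homography between consecutive frames from matched features, then warp each new frame onto a common mosaic canvas so that what has been seen stays on screen.

```python
# Minimal frame-to-frame mosaicing sketch with OpenCV (illustrative only,
# not the WiSAR group's actual pipeline).
import cv2
import numpy as np

orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def frame_to_frame_homography(prev_gray, curr_gray):
    """Estimate the homography mapping the current frame into the previous one."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:200]
    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def add_to_mosaic(mosaic, frame, H_to_mosaic):
    """Warp a frame into the mosaic canvas; H_to_mosaic is the accumulated
    product of the frame-to-frame homographies up to this frame."""
    warped = cv2.warpPerspective(frame, H_to_mosaic, (mosaic.shape[1], mosaic.shape[0]))
    mask = warped.sum(axis=2) > 0
    mosaic[mask] = warped[mask]  # naive overwrite; real systems blend frames
    return mosaic
```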




We have also developed other automation to assist with the search and rescue work. Examples include automatically suggesting likely places to find a missing person, various automatic path planning algorithms for the UAV, anomaly detection algorithms, etc. Those will be discussed in a separate blog post in the future.

The video below is a compilation of some other capabilities of the UAVs made by the MAGICC lab, including obstacle avoidance, multiple-UAV formation flight, etc. Too bad the audio track was disabled, but you can leave the music running from the videos above and then watch it in rhythm. :) Note that at the beginning of the video, the UAV was launched from inside BYU campus. Of course, this is no longer allowed due to tighter FAA rules and regulations!




Picture of the Day:



People have always wanted to roam the sky freely like birds.
I don't, because I've got UAVs.

Monday, January 19, 2009

My Research: BYU UAV Demo Dry Run

Hi, everyone who reads my blog! Happy New Year to all of you. Wish you a very exciting and productive new year! (See the picture of the day below!) I only have one New Year's resolution this year -- that is, to catch up with the blog! :) That means I'll have to post at least two blog entries each day! So get ready for a flood of interesting (and hopefully insightful) postings. Also be prepared for the strange parallel time/space I'll be living in.

Note that I am starting a new track today called "My Research." Postings with this tag will talk about the AI/Robotics research I am working on. I hope you find inspiration in these postings, and comments are especially welcome for this track!

If you have not noticed, there's a section on the right side of my blog called "Blog Labels." This is a good way to filter for postings you might find interesting. For example, there's a label for each book I translate. The only drawback is that you'll have to read backwards. :) Also, if you like my postings, please follow my blog (see the right side). I am interested to see how many people really like my postings, and the more people who like them, the more motivated I will be! Okay, enough babbling, let's move on to the real fun stuff.

=============================================

Part of my research is about how to use an Unmanned Aerial Vehicle (UAV) to support Wilderness Search and Rescue (which we refer to as WiSAR). On November 14, 2009, our research group performed a field dry run in Elberta, Utah (a place in the middle of nowhere) in preparation for an upcoming demo for the Utah County Search and Rescue people.

Utahns love outdoor activities because we are blessed with lots of beautiful mountains and wilderness. As a side effect, there's also a great demand for wilderness search and rescue, because people get injured, lost, or go missing in the wilderness. The goal of our research group is to use UAV technologies to support wilderness search and rescue operations. Obviously, real-time video from a UAV with a bird's-eye view can provide useful information for the search and rescuers, especially for areas that are hard to reach quickly. The UAV can also cover an area much faster than search and rescuers on foot. Our research group has been working on this for several years and has made good progress. However, the technologies will only make a difference if the search and rescuers find them useful and start using them. That is why we were eager to do a demo for the real search and rescuers. And the purpose of the dry run was to make sure all technology components were ready.

The previous day's weather forecast predicted snow for the next day. Sure enough, when I left home at 7:30am, the ground was covered by snow. Elberta is about a one-hour drive from BYU campus. Interestingly, the weather got better and better as I drove, and by the time I arrived in Elberta at 9:00am, there was no snow!


 
Elberta, Utah, early morning!


For our research, we use a fixed-wing, propeller-powered, model-plane kind of UAV, shown in the picture below. We also have a nice trailer, which has a power generator, some mounted LCD monitors, a long table, and even a microwave!


 
Fixed-wing UAV and its container



 
Outside look of the trailer (showing the power generator)



 
Inside view of the trailer



It took about 30 minutes to get everything set up. Meanwhile, an umbrella (marking the location of the "missing person") had also been placed at a distance from the "command post." By 9:45am, we were ready to throw the plane into the air (literally, that's how we launch the UAV, because the UAV has built-in intelligence for auto-launching).


 
Ready? Launch!



Inside the trailer, we had two laptops running. One laptop is used to control the UAV with a program called Phairwell (don't ask me, I didn't pick the name), where the operator can set waypoints for the UAV to follow (or a flight pattern). The operator can also control the UAV's heading, speed, roll/pitch/yaw, height above ground, altitude, and so on. The other laptop is used to view the video feed coming down from the UAV. It is worth mentioning that the video frames are actually mosaiced together so the video observer can view a larger area while each video frame stays on the screen for an extended time, for ease of searching.


 
Laptop running the UAV control software Phairwell



 
Laptop running video mosaicing software


Amazingly, the weather turned perfect! There was nothing more we could have asked for!

 
Sunny Elberta! What a beautiful day!


The dry run was quite successful. We performed several flights and fixed a few glitches, especially with the auto-landing control. The picture below shows how the UAV lands (yes, it's a crash landing). The picture was actually taken at a previous field trial, because it is quite difficult to keep the UAV in the camera frame.


 
UAV auto-landing


At 11:30am, just when we were ready to enjoy our lunch (Subway sandwiches) after a successful dry run, guess what, it started to snow!!

 

We ended up packing everything first, and then had our lunch inside the trailer (aren't we glad there is a microwave in the trailer!). That's me packing in the snow in the picture below. Don't ask me why those other two were doing a synchronized penguin walk in the background, 'cause I don't know!





That's it! We were fortunate enough to have a window of nice weather (contrary to the weather forecast) for the dry run, and we were ready for the demo!!

See the complete gallery for the dry run
Download geo-tagged photos for Google Earth view (double click the kml file)

Picture of the Day:



Wish you all a very exciting New Year! Hee-Ha!

Saturday, January 17, 2009

Paper Review: The Music Notepad

This paper was written by researchers at Brown University and published at UIST '98.

Notating music can be done with a common WIMP UI (windows, icons, menus, pointer) such as those used in popular software synthesizers and editing tools (e.g., Cakewalk). However, the user model of using paper and pencil is very different and is more desirable because of its simplicity. This paper presents a system that allows musicians to create music notation directly with a stylus on a Tablet PC.

 

The system described in this paper followed some previous work from Buxton, but added more features. The notation system allows the drawing of notation symbols, beams, accidentals, clefs, and key signatures. Editing includes region selection (lasso), copying, pasting, and deleting (with a scribble or text-editing-style delete gesture). The user can also assign instruments and view more of the music score using a perspective wall metaphor.




The authors developed an alternate method for entering notes by "scribbling in" a notehead. This is different from Buxton's gestures (which had a poor user experience). It allows accurate placement of symbols because an average position is used. It is also natural to the user, because that's how they do it on paper. However, this method could be slower than point-and-click and also does not convey the note duration. The video below shows how the system works.



To evaluate the system, the authors asked some users to try the system and then performed some informal interviews.

What's great about this paper is that it is the first to use gesture recognition to tackle the problem mentioned. The weak spot of the paper is its evaluation. If a more formal user study had been performed to specifically measure certain aspects of user performance by comparing the old vs. new systems, the results would have been more convincing. On a side note, the paper mentioned estimating the probability of posted tokens. I wish the paper had discussed more about how that probability is calculated.

You can follow this link to read more about this project at Brown University.

In my humble opinion, a good UI is one where there's a minimal amount of learning/training/practicing involved. To the user, it almost seems that all the designs are natural and logical conclusions (based on the normal experiences of a standard user – with a certain profession or within a certain era). There might be better and more efficient ways (e.g., I can type a lot faster than I write, and my handwriting is ugly); however, it might take a lot of training and practice to achieve that efficiency. In such cases, the best thing to do is probably to give the user the options so he/she can pick the way he/she wants it. Some incentives (with proper tutorials and demos) might be helpful to try to persuade the user to move toward the more efficient method, so he/she will endure the (maybe painful or dull) training and practice for higher efficiency. The important point is to let the user make the decision himself/herself. A forceful push toward the new method will only generate resentment (e.g., Windows Vista).



A user judges a solution based on how easy it is to use, not how great the designer thinks it is.



Friday, January 16, 2009

AI and Robots: StarCraft AI Competition to be held at AIIDE 2010

The Sixth Artificial Intelligence for Interactive Digital Entertainment Conference (AIIDE 2010), one of the conferences organized by the Association for the Advancement of Artificial Intelligence (AAAI), will be held in October 2010 at Stanford University (as always). The organizers have recently announced that they will be hosting a StarCraft AI Competition at the conference. AI researchers all over the world will have the chance to let their AI systems compete on a Real Time Strategy (RTS) platform, and the final matches will be held live at the conference.

The idea of having AI agents compete with each other in gaming environments is nothing new. In fact, in one of the AI classes I took at BYU, we had to program agents to compete against other teams playing BZFlag, a Capture the Flag game with tanks. The winning team got an automatic A for the class. That was certainly a lot of fun. Even though we didn't win the end-of-semester competition (because of a bug that occasionally confused our agents between home base and enemy base, d'oh!), we, as human players, had a hard time beating the agents we created ourselves.

In 2007, I went to the AAAI conference held in Vancouver, BC. At that conference, there were two live AI competitions. One was the General Game Playing Competition, where AI agents compete in games they have never played before (all they know is the game logic, given at competition time). The winning agent then played a game of Pac-Man against a real human player and was able to force a tie! The other was the Computer Poker Competition, where the winning agents challenged two real-world Vegas professional poker players with real money on the table ($50,000). Although the professional poker players narrowly defeated the poker-playing software, the two players felt as if they were playing against real humans.

What makes this StarCraft AI Competition unique is:
  • StarCraft is a very popular game with a commercial rendering engine and beautiful graphics.
  • It is a Real Time Strategy (RTS) game where the player controls many characters at the same time and has to manage game play strategies at both the macro and micro levels.
The following video shows the kind of game play one would expect to see in StarCraft. Make sure you watch the HQ version in full-screen mode to really appreciate the beautiful real-time graphics rendering.


Follow this link to get more info about how to use the Brood War APIs to write bots that work with the StarCraft game engine. If I weren't buried in papers Piled Higher and Deeper, I'd probably be writing some agents just for fun!

There are, of course, other commercial game engines used for AI and robotics research. For example, the game engine for the very popular first-person shooter Unreal Tournament has been turned into USARSim (Unified System for Automation and Robot Simulation), a high-fidelity simulation of robots and environments.


Now my question is: when will EA Sports ever release APIs for their FIFA 2010 video game, so I can write software agents that play the game of soccer like real professionals (at least graphically)?



Picture of the Day:


 
BYU Computer Science Department Building
(See that big Y on the mountain?)

Thursday, January 15, 2009

Robot of the Day: Aida, Your Driving Companion

[Don't get confused with the dates. You'll find that I frequently travel back and forth through time -- in my blog. :) ]


Aida is a robot built by Mikey Siegel from the MIT Media Lab for a research project at Audi. It is supposed to be a driving companion, something to be installed in your car!

During the summer of 2009, when I was doing an internship at the Intelligent Robotics Group at NASA Ames, I met Mikey for the first time. He was on his way to the Audi Research Center, located in the heart of sunny Silicon Valley, to present the robot he had built for them, but decided to stop at NASA Ames first to show us the robot, because he used to be an intern at the IRG himself.

The purpose of the robot is to experiment with the idea of using a robot to influence people's driving behavior. Researchers hope to use the movement of the robot (really just the neck movement), its different facial expressions, and its speech to encourage people to drive more safely. This requires the robot to be able to communicate with humans using many social cues, which is exactly the research topic of the Personal Robots Group at MIT, led by Dr. Cynthia Breazeal, Mikey's advisor.

According to Mikey, the robot was built within a three-day period (I assume he didn't really get much sleep), which caused all our jaws to drop. The lovely head was printed on a 3D printer, and he also machined all the mechanical parts himself. However, to be fair to the other members of his lab, he added, the neck design was copied from another project, the animated eyes and mouth movements were created by a friend (if I remember correctly, someone from Pixar), and the software control was a mixture of modules previously developed at MIT and open source libraries such as OpenCV.

When Mikey demoed the robot to us, Aida was able to recognize faces. It became excited when it was surrounded by many people, and acted bored when it was left alone. The animated emoticons projected onto the plastic face from the back of the head made the robot look very cute, and the smooth neck movement made it almost appear "alive". At that time, the only sensor it had was a video camera mounted on the base (not moving with the neck or head), but eventually, Aida will be equipped with more eyes (cameras) and ears (microphones), so it can sense the world around it better.




Having a cute robot interacting with people in their cars sounds very cool; however, I am not so sure it is such a great idea.

First of all, could it be possible that the moving robot might distract the driver with its cute winks? I couldn't help but remember those signs next to bus drivers I used to see when I was a young kid: "Do not talk to the driver!" These days, when many states are making it illegal to talk on a cell phone while driving, what would they think of a robot that not only talks to the driver, but also tries to get the driver to look at it?

Secondly, don't you get annoyed sometimes when your better half keeps criticizing your driving skills (or was that just me)? Now imagine a robot, nagging constantly right next to your ear like your dear Grandma, telling you that you are driving too fast, or that you hit the brakes too hard. Especially after you rear-end someone, I am sure a nagging robot saying "Told you! Told you not to follow so closely" would be the last thing you want.... (Disclaimer: I have never rear-ended anyone!)

On the other hand, for those LA solo commuters who regularly get stuck in traffic for many hours (I was recently stuck in LA traffic for hours, so I know!), Aida would make a great driving companion! And I certainly wouldn't mind such a cute robot making conversation with me, while my car drives itself to my intended destination!

Video of the Day:

If you were there at the Liverpool Street Station on January 15, 2009, would you have joined in?

Tuesday, January 13, 2009

AI and Robots: High School Students Register With Their Faces

In a previous post we discussed challenges to facial recognition applications and what people had to do (or chose to do) to get by (or bypass it). Does that mean the technology is not ready for the real world? Today we'll see a case where it is used in a real-world environment and is actually working quite well.

At the City of Ely Community College in the UK, sixth-form students now check in and out of the school register using their faces. The facial recognition technology is provided by Aurora, and the college is one of the first schools in the UK to trial the new technology with its students.

So how does the technology work? The scanning station is equipped with infra-red lights and a regular video camera. Each infra-red sensor actually has two parts: an emitter and a receiver. The emitter shoots out a series of infra-red signals, and the receiver detects the infra-red light reflected back by objects in front of the sensor (a simple example would be the auto-flushing toilets in public restrooms). Then, by analyzing the strength and pattern of the received signals, the sensor can tell how far the object is from it. This allows the scanner to create a range (depth) image of the object in front of it. So the resulting image is a 3D surface, unlike a regular 2D image from a camera.

Combining this 3D surface with the 2D image taken from the video camera, features are extracted from the entire data set, and each set of features is tagged with a student ID (we know which face it is because each student has to be scanned at the very beginning so the data can be stored in the database). At scan time, it becomes a simple machine learning classification problem, and I suspect they probably just used nearest neighbor to match features to an individual student. You can click the image below to see a video of this from the original news article.

Click image to see video.
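Just to illustrate that last matching step (this is my guess at the approach, not Aurora's actual system), a nearest-neighbor lookup over enrolled feature vectors is only a few lines of code. The hard part, extracting good features from the 2D image plus the depth surface, is not shown here.

```python
# Minimal 1-nearest-neighbor matching sketch (my guess, not Aurora's system).
import numpy as np

# One feature vector per student, built at enrollment time (made-up numbers).
enrolled_features = {
    "student_001": np.array([0.12, 0.80, 0.33, 0.54]),
    "student_002": np.array([0.91, 0.15, 0.62, 0.08]),
}

def identify(scan_features, threshold=0.5):
    """Return the closest enrolled student ID, or None if nothing is close enough."""
    best_id, best_dist = None, float("inf")
    for student_id, feats in enrolled_features.items():
        dist = np.linalg.norm(scan_features - feats)
        if dist < best_dist:
            best_id, best_dist = student_id, dist
    return best_id if best_dist < threshold else None

print(identify(np.array([0.10, 0.78, 0.35, 0.50])))  # -> student_001
```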
So how do people like this high-tech face recognition system? Principal Richard Barker said:
With this new registration technology, we are hoping to free up our teachers' time and allow them to spend it on what they are meant to be doing, which is teaching.

As for the students, they love the idea of taking responsibility for their own registration and using Mission Impossible-style systems.


So why did this specific application turn out to be a success? That's the question we really should be asking. I think we have to attribute the success to the following factors:
  • The system combines a 3D depth image with a 2D image, which allows the creation of many features (and some of them got the job done).
  • The college has a relatively small number of sixth-form students. Classification becomes easier when you don't have to recognize a face out of millions of faces (as in the airport security check case).
  • The student is also required to enter a PIN. This further improves accuracy. I guess the facial recognition technology is really there to prevent students from signing other people in and out.
  • Most importantly, the consequence of errors is very low. What if a face is not recognized correctly? The worst that could happen is an erroneous record in the register. It's not as if the student would be flagged as a terrorist at an airport, which could have severe consequences.
I certainly hope to see more and more successful facial recognition applications out there, so people can focus on what they enjoy doing instead of what they have to do.

Picture of the Day:

I think this would make a perfect picture for today.
Here I present: Lanny in 3D





Monday, January 12, 2009

AI and Robots: No Smile Allowed, When Technology Is Not Good Enough.

Since I've been struggling with my hand recognition application, which is a far easier problem than face recognition, I thought I'd discuss facial recognition applications some more.

In a previous post, I talked about how the facial recognition currently built into laptops can easily be hacked. Today we'll talk about another real application of facial recognition, and specifically, what people do when the technology fails.

About 20 states in the US use facial recognition technology with driver's licenses. To fight identity fraud, one standard procedure at DMVs is for a DMV employee to look at the old photo of a person to see if it looks like the person seeking a new license. Using facial recognition technology, this step can be automated to improve efficiency, and the technology also, supposedly, allows the detection of facial features that are not easy for a human to recognize, thus improving the accuracy of the detection.

The Indiana Bureau of Motor Vehicles recently rolled out a new set of rules governing how people must be photographed for their driver's license photos. Unfortunately, Indiana drivers are no longer allowed to smile. Smiling is taboo, alongside glasses and hats.

What's going on here? It turns out the new restrictions are in place because, according to BMV officials, smiling can distort the facial features measured by the facial recognition software.

It is very interesting to see the kinds of restrictions placed on users when the technology should have done the job. Here's something that will surely improve the accuracy of the facial recognition even more: how about requiring all drivers to get a crew cut (men and women) and to be clean-shaven?

I simply can't resist showing the picture below, which is part of the grooming standards in BYU's Honor Code, which I am openly opposed to.


Facial recognition technology was also tested at airports in hopes of detecting terrorists, but it failed miserably, as expected.

"According to a story by the Boston Globe, the security firm which conducted the tests was unable to calibrate the equipment without running into one of two rather serious problems. When it's set to a sensitive level, it 'catches' world + dog. When it's set to a looser level, pretty much any idiot can escape detection by tilting his head or wearing eyeglasses."


The most popular facial recognition algorithm used today is the SVM (Support Vector Machine) because of its good performance on real-world data. The video below demonstrates how well the algorithm works (also using Gabor wavelets).
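To make the SVM idea concrete (the video below shows a much more sophisticated system), here is a minimal face identification sketch with scikit-learn on the small Olivetti faces dataset. It is a stand-in for illustration only: it trains a linear SVM on raw pixel intensities rather than Gabor wavelet features, and it is nothing like a production system.

```python
# Toy SVM face identification sketch (illustrative only, not the system in the video).
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Olivetti faces: 400 grayscale face images of 40 people (64x64 pixels each).
faces = datasets.fetch_olivetti_faces()
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0
)

# A linear SVM on raw pixels already does reasonably well on this small dataset.
clf = svm.SVC(kernel="linear", C=1.0)
clf.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```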




Anyway, I think there is still a long way to go for facial recognition technology to be useful in serious applications. Frankly, I am no good at facial recognition myself. A lot of times, I rely on hair style and glasses to help me remember people's faces. However, I don't think it is a good idea to impose lots of restrictions on the user just because the technology is not good enough. That's my 2 cents.

Newton Moment: when you do things that are considered silly by normal people simply because you are too focused on thinking about your research.

Exceeding your wife's tolerance threshold for the number of Newton Moments per day can have serious consequences.



Video of the Day:
Try detecting this face!



Sunday, January 11, 2009

Robot of the Day: G8 Robotic Fish to Detect Water Pollution

British scientists, specifically researchers at the University of Essex, plan to release a group of robot fish into the sea off northern Spain to detect pollution. This is part of a three-year research project funded by the European Commission and coordinated by BMT Group Ltd.



These carp-shaped robots look very much like real ones, just big (nearly 5 feet long) -- roughly the size of a seal. The tiny chemical sensors installed on these robot fish enable them to find sources of potentially hazardous pollutants in the water.

These robots all have autonomous navigation capabilities, meaning no remote control is needed to direct them. All that is required is to simply "let them loose". Using Wi-Fi technology, the data collected can be transmitted to the port's control center. The battery on each fish lasts approximately 8 hours, and, similar to the Roomba vacuum cleaning robots, they are smart enough to return to a "charging hub" to recharge when the battery runs low. The video below demonstrates the swimming capability of such a robot fish, the G8 model. It really swims like a fish!!



The fish can swim at a maximum speed of about one meter per second, which means a fish can venture as far as 14.4 kilometers from the "charging hub" (which I think might be too far for the charging hub to still receive good signals). The cost of building one such robot fish is around £20,000 (roughly $29,000), so it is certainly not cheap. There are also smaller ones created by the same group of researchers, as shown in the video below. I guess these are more suited for a fish tank.
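A quick back-of-the-envelope check of that 14.4 km figure, assuming the quoted top speed and battery life (and that the fish has to keep half its charge for the swim back):

```python
# Back-of-the-envelope range check using the figures quoted above.
speed_m_per_s = 1.0     # quoted maximum swimming speed
battery_hours = 8.0     # quoted battery life

total_km = speed_m_per_s * battery_hours * 3600 / 1000   # 28.8 km per charge
max_range_km = total_km / 2                               # it has to swim back, too

print(f"Distance covered on one charge: {total_km:.1f} km")
print(f"Maximum distance from the charging hub: {max_range_km:.1f} km")
```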





So why robot fish? Why not machine-looking mini-submarines? Rory Doyle, a senior research scientist at BMT Group, said,

"In using robotic fish we are building on a design created by hundreds of millions of years' worth of evolution which is incredibly energy efficient. This efficiency is something we need to ensure that our pollution detection sensors can navigate in the underwater environment for hours on end."


Personally, I think this technology is great because:
1. As stated, using the fish design is very energy efficient.
2. The robots can navigate autonomously, which doesn't require human intervention.
3. Chemicals dissolved in the water under the surface can be detected.
4. Data can be sent to the data center wirelessly.
5. The fish robots can recharge themselves when needed.
6. The fish form also helps them blend in with the environment (and maybe hides them from people who intentionally pollute our water).

Now if they were capable of the following, it would be even better:
1. Trace the source of the pollution autonomously (maybe through some heuristic path planning algorithms).
2. Take pictures of the pollution source (to help identify/analyze the cause and maybe use them as evidence in a court of law).
3. Somehow obtain energy on their own? Eat seaweed, little fish, or shrimp and generate energy through metabolism?
4. Also, in case of malfunction, is there an easy way to retrieve it? Maybe using another robotic fish?

Every coin has two sides, and there are certainly concerns about this technology too. For example: what if some other fish (a shark, perhaps?) attacks the robotic fish and treats it as food? I am sure the robot fish won't be easy to digest and might kill the poor (real) fish. Who's responsible for that? And how about the disappointed fisherman who happens to catch the robotic fish?

You can read more about the robotic fish from the following articles:

Article at BMT web site
News Article at Reuters





Sheer willpower, no matter how strong it is, will not make a problem go away.

Saturday, January 10, 2009

Joy of Life: Volume 1 Chapter 1

Good stuff should be shared. If you enjoy reading my posts, please share them with your friends too! You're also welcome to comment on various topics and present different views.

Volume One: The City by the Sea
-- written by Maoni

Chapter 1: Story Gathering

Port Danzhou was located at the east end of the Qing Empire. Although it was a city by the sea, because several ports further south had already been developed and the sea route to the western world had been opened, the empire's trade center had moved south, and the port city gradually fell into decline. The once bustling harbor had quieted down over the last few years.
Seagulls flew about the city freely and no longer had to worry about harassment from those loathsome sailors.
The original residents of the city, however, didn’t feel too big of a change in their everyday lives. It was true that their income had declined, but the emperor, His Majesty, had exempted the city from taxes several years ago. Therefore, life wasn’t bad. Besides, this seaport city was so beautiful. Now that it had become quiet once again, it naturally became even more livable. That was why occasionally some influential people would choose to build their manors here.
Because it was too far from the Capital City, very few officials stayed. The only one that somewhat qualified had to be the Old Madam living in the west part of the town.
It was said that the Old Madam was the mother of the Count of Southernland in the Capital City, and chose to live out her life in retirement in the city by the sea. The residents of Port Danzhou all knew that the Count of Southernland was in great favor with His Majesty. That was the reason why he wasn’t assigned a position outside of the Capital City like usual, and instead, stayed in the Capital City and got a job in the Ministry of Finance. That was why everyone showed plenty of manners and respect to the residents of the manor.
But kids are ignorant of such things.
It was a fine and warm day. Adults mostly found themselves sitting inside a wine-house, enjoying the salty humidity blown in by the sea winds, together with the preserved plums and wine in their cups.
In the western part of the town, just outside the back door of the Count of Southernland's manor, a group of teenagers surrounded the stone steps. Shoulder to shoulder, they filled the entire opening. What could they be doing there?
If one got closer, he would find a very interesting scene. It turned out that all these lads were listening to a four- or five-year-old kid speaking. The little boy had a pretty face. His well-trimmed eyebrows and his crystal-clear eyes looked as though they came out of a beautiful painting. Although his young voice was that of a little kid, the tone of his words clearly showed the arrogance of a senior. With a sigh, he gestured with his little arms.
“Truman walked up to the wall, and saw a staircase, so he walked up the staircase step after step. Then he found the door. Pushing the door open, he walked out…”
“What happened then?”
“Then? Then…he naturally went back to the real world.” The little boy pouted, as though he was quite weary of such a retarded question coming from someone much older than him.
“It can’t be! Shouldn’t he go seek out that Hanny…something…?”
“Harris,” another teen picked it up.
“Right! Shouldn’t Truman find that Harris guy and kick his butt to vent his anger? He was locked in for so many years.”
“Nope.” The little boy shrugged.
“That was no fun! Young Master Fan Xian, today’s story is certainly not as good as the ones in the last few days.”
“Which ones did you like?”
“A Dreamy Journey to the Far, Far Away[1]!”
“The Story of a Charming Wanderer[2]!”
“Bah!” the little boy named Fan Xian stuck his middle finger out at the older kids around him. “Fighting, killing everywhere is bad for your health! Digging everywhere for treasure is bad for the environment!”
A loud, angry voice arose from inside the manor all of a sudden, “Young Master, where did you go again?”
The older kids surrounding him also stuck their middle fingers out in imitation and shouted in unison, “Bah!” Because of the large number of people involved, it was a much grander effect. Grinning triumphantly, they quickly scattered and disappeared into the nearby alleys.
The little boy stood up from the stone steps and swiftly whisked the dust off the seat of his pants. Then he turned and dashed into the courtyard. Before closing the door, he shot a quick glance with his shrewd eyes toward the young, blind shopkeeper in the small grocery store across the street; a complex emotion, something completely inappropriate for his age, flashed across his face as he gently shut the door tight.


This was the fourth year since Fan Shen had first come to this world. Over these years, he had finally realized it was not a dream. He had indeed arrived in an unknown world. This world appeared nearly identical to the world in his memory, but also seemed to have many differences.
By eavesdropping on conversations among the servants in the Count's Manor, he eventually figured out his status: he was the baseborn son of the Count of Southernland. Just as in any TV series about powerful and rich families, the status of a baseborn son made him an easy target for all kinds of evil schemes from people such as First-Aunt[3], Second-Aunt[4], and others. He happened to be the only son of his “supposed” father. For the sake of extending the Count's bloodline, he was sent to Port Danzhou, far away from the Capital City.
After so many years, he gradually grew accustomed to his new identity. Trapping an adult's soul in the body of a child was certainly a very different experience, both biologically and spiritually. If he had been any other normal person, he probably would have gone mad. However, conveniently, Fan Shen happened to have been a patient with non-functional muscles who had lived in a sickbed for many years. His difficulty with mobility in this life was nothing compared to the miserable state of his prior one. As a result, he found himself without much discomfort living inside the body of an infant.
The greatest discomfort nowadays was actually his name. When he had been about one year old, His Excellency, the Count in the Capital City, sent a letter and bestowed a name upon him: Fan Xian[5], with the middle name Anzhi.
This was not a good name, because it sounded just like a cuss word from his original hometown -- “fan xian”, which meant having nothing better to do.
Since his outward form was just that of an infant, it was impossible for him to voice his objection.
In his prior life when he had been treated in the hospital, especially in the early days, he could at least still turn his head around. So he frequently begged that lovely little nurse to buy pirated DVDs and books for him.
After living in the Count’s Manor for an extended period, he could tell that the Old Madam only appeared to be cold in manner, but inside, she doted on him very much. The slave girls and servants also didn’t regard him with any special treatment because of his status as a baseborn son. Though, the pain of not being able to communicate with others bothered him very much.
Could he have discussed with the slave girls that he was actually from another world? Could he have told the home tutor that he actually already knew all the characters in the books?
Therefore, he often snuck out of the Count's Manor through the side door and played with the children of the poor on the streets. His favorite activity was to tell them stories, stories from the movies and novels of his own world.
It was as though he wanted to keep reminding himself about something, that he did not belong to this world. The world he truly belonged to had movies, the Internet, and YY novels[6].
He didn’t know why he told the story of the movie: The Truman Show. The plot of the movie was plain to start with; besides, there was no Jim Carrey to make people laugh. He should have clearly known that these teenagers of Port Danzhou would have no way of liking it.
But he told it anyway, because he always felt that absurdity deep inside his heart. He had been dying. What made him reincarnate inside this body? He couldn't help but remember that movie…. Maybe all these people, the streets in front of him, even the seagulls flying in the sky, were just props intentionally arranged by someone, just like in Truman's world.
Truman eventually discovered the falsity of the world he lived in, so he resolutely took the ship and found the exit.
But Fan Shen, no, he should be called Fan Xian now … knew that he was no Truman, and that this world truly existed. It was not just a huge soundstage. Therefore his act of telling stories every day, to remind himself that he was from a different world, was something truly absurd in itself.





[1] A popular Internet novel.
[2] Another popular Internet novel.
[3] First wife of the Count.
[4] Second wife of the Count.
[5] Here “Fan” is the last name, and “Xian” is the first name. In Chinese, “Xian” means leisure, idle and unoccupied. “Anzhi” means relax and be content and came from the verse, “Once here, then relax and be content,” which originally came from the book “The Analects of Confucius.”
[6] YY stands for Yi Ying, which means being as ridiculous and impudent as your mind will let you. The main character in such YY novels would always have the best fortune in the entire universe and eventually have all the fame, money, and power and become epic heroes, kings or gods in this ridiculous and non-logical world the author created. This novel itself could be classified under this genre.



Now support the author Maoni by clicking this link, and support the translator Lanny by following my blog! :)


Video of the Day:

This has nothing to do with robots or translations. But isn't this kid simply amazing?