Earth Augmented

Remember when Google Earth first came out? Many of your friends, classmates and even your parents checked their computers and phones to get a glimpse of the bird's-eye view of landscapes across the globe. Maybe you were one of those people who zoomed all the way in on your address or street, hoping to catch a view of your house.

Can Google Earth be taken to the next level? At Tech, Ph.D. student Kihwan Kim and his advisor, Irfan Essa, have created an advanced form of this technology. Essa, Kim and other researchers have developed a system that incorporates real-time video from traffic and surveillance cameras, along with 3-D modeling of virtual skies, cities and even people. No, their program isn’t Big Brother; it is an advancement of the Google Earth technology that adds simplified human models. The program not only shows models of people, but also lets one view the weather, traffic and even sporting events from an aerial view in real time.

Want to watch a football game in real time from across the country? With Kim’s and Essa’s program, this is possible. All you’d have to do is go to the respective Internet map and locate the region you’d like to view, and soon enough you’d have a bird's-eye view of the game. There would be real-time models of the players, so you wouldn’t miss a single play.

“Our genius professor started to think of this idea [for the project]. We began to wonder how we could visualize these things. We started talking about these ideas when I was a Master’s student and started making this program,” said Kim.

“Kihwan Kim and I were motivated to make visuals like Google Earth alive. We felt that whenever someone showed an overview of a city, it did not look like a city. We wanted to make the fly-through of a city, via aerial maps to be dynamic,” said Professor Essa.

“Our research is very simple. We’re making the future Earth alive. We want to augment, make the object, visualize the object such as traffic on Google Earth,” said Essa.

Kim further explained his research, describing what makes it that much more specialized than previous programs. “Our videos have virtual views. Instead of a limited number of views, we can change our viewpoint. It shows you a different perspective based on your location…the video blends with your line of vision. For example, you can change your view in your environment based on your video viewpoint.”

“At present [in the Google Earth program], cities have a feel of ghost-towns. Our work attempts to analyze information from sources like video cams of traffic and sports, register them on aerial views so as to show an alive version of the earth maps. We leverage efforts like Google Earth and Microsoft Virtual Earth. These systems have a large amount of aerial imagery registered to maps. However, the aerial images used by them were captured a while back and are just static images. They do not show what is going on in a city. Our prototype system adds that feature,” Essa said.

While it’s not possible to have a camera lurking on every street corner, the program uses computer graphics to fill in the blanks.

“We can also model the sky based on cameras. We calculate and infer what’s happening in between cameras, generating smooth transitions. You can see objects in a symbolized model, such as people and cars,” said Kim. “We are still in a simple model and testing the research. We still don’t have proper cameras and computation powers. And we can still expand the scope of the research.”

Some feel as though Tech’s team of researchers has taken the program too far, claiming it invades privacy.

“Invasion of privacy is not true,” Kim said. “We are not recognizing people and showing the footage of them…it’s just a symbolized model. The background is just an image, so you cannot see actual people. People are concerned with the future. Such as, if a company were to industrialize this technology, it could be a huge problem. This is not the case for our project. We’re just replacing actual images with 3D images.”

Essa said, “At present, our work is aimed at not identifying anything or anyone. We do not identify cars or people. We just track the motion and flow of both and abstract them to generic models. As these are real video, someone watching the videos could identify the people, and we are aware such technologies do exist. At present, we are not working on integrating such technologies. We are at present using purely public sources of information, like videos of traffic cams, sports footage and videos of sky movements.”

Over time, this technology may be developed even further. “We hope to continue on adding other forms of data to augment earth maps. This could include weather patterns, smog and clouds and also crowds in public spaces,” said Essa.

“In our opinion, if we could continue this project, we can expand the scenarios. We can deal with rivers and seashores, or birds, from videos. But the problem is, now how do we expand these systems on Google Earth? There is still research [to be done], and there is still a gap in the engineering to make it commercialized,” Kim said.

Some may argue that this new technology is merely a tool that invades privacy. However, the program stretches past Google Earth’s boundaries, allowing users to be somewhere else in real time. Essa, Kim and Tech’s other researchers are still developing the program, creating new simulations and scenarios and finding new ways to further real-time video.

Whether or not this work will revolutionize the way students and others view technology, the program is bulldozing past Google Earth’s limitations and pushing the boundaries of what can be done with this type of multimedia in the future.