What is Multimedia? A 21st Century Viewpoint

One of the modules I lecture on is Interactive Multimedia, taken by around a hundred or so third-year students. Each year the very first slide I present asks the question “What is Multimedia?”. Given that classes for the second semester commence once again next week, I thought I would put this question to all of you out there in the blogosphere to see what you regard multimedia to be. The word cloud below (created using this generator) is perhaps one simple example of multimedia in action, whereby one provides a list of words and an image is generated based on the frequency of those words. Perhaps if I receive a good few comments I could create an updated word cloud that better reflects what “Multimedia” is today!
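The frequency-counting step at the heart of a word cloud generator is easy to sketch. Below is a minimal illustration (my own toy code, not the generator linked above): count how often each word occurs, and a real generator would then scale each word's font size by that count.

```python
from collections import Counter

def word_frequencies(text):
    """Count how often each word occurs, ignoring case and punctuation."""
    words = [w.strip(".,!?\"'()").lower() for w in text.split()]
    return Counter(w for w in words if w)

# A word cloud generator scales each word's font size by its frequency.
freqs = word_frequencies("text images video audio video text text")
print(freqs.most_common(2))  # → [('text', 3), ('video', 2)]
```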

[Image: multimedia word cloud]

The classical definition would be something along the lines that multimedia combines a mixture of content such as text, images, video, audio, animation and interactivity. Has the definition of multimedia changed, given that we are now living in a world where technology is ubiquitous?

What about Smart Televisions? Televisions used to be very much a passive form of information transmission focused on the visual and auditory senses. The Smart Televisions of today can be controlled by gestures, and the accessible media is no longer just a broadcast that you must tune into: on-demand content and online interactive media now put the user in control. If you want, you can even control the TV using your smartphone, along with other things such as the lights and heating throughout your home. It's probably safe to say that we are living in the age of “the App”, in that for more or less anything you can think of there is an “App” out there in cyberspace just waiting for you to download.

The video below is of the LG booth at CES 2013, featuring TVs with built-in cameras for gesture control and microphones to enable voice commands. They will also recommend TV shows and movies based on your viewing habits. Many people currently have Full HD televisions at 1920 x 1080 (sometimes called “2K”), but in recent times 4K resolution TVs have become available to the consumer (though they carry price tags of 20K+). Even higher resolutions exist, such as the 8K Sharp TV demoed at CES 2013.
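The jump in resolution is easy to underestimate from the labels alone, so here is a quick pixel-count comparison using the standard consumer UHD resolutions:

```python
# Pixel counts for common TV resolutions (the 4K/8K figures are the
# consumer UHD standards; "2K"/"4K"/"8K" are marketing labels).
resolutions = {
    "Full HD (1080p)": (1920, 1080),
    "4K UHD":          (3840, 2160),
    "8K UHD":          (7680, 4320),
}

full_hd = 1920 * 1080  # 2,073,600 pixels
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels:,} pixels ({pixels / full_hd:.0f}x Full HD)")
```

So a 4K panel carries four times the pixels of Full HD, and an 8K panel sixteen times.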

Speaking of television and film, what about the world of 3D, such as plano-stereoscopic imaging or even the IMAX experience? My most recent IMAX screening was The Hobbit at Cineworld Dublin just a few weeks ago. It was great seeing the film for a second time, and even better seeing it on a much larger screen than you would usually find at a cinema. By far the best IMAX experience, in my opinion, was that of the BFI IMAX just around the corner from Waterloo station in London, thanks to its curved screen and the SPL of its speaker system.

As you probably know, The Hobbit was recorded using 5K Red Epic cameras mounted in pairs on rigs (capturing separate left- and right-eye views for the 3D effect), allowing the interocular distance and convergence to be changed on the fly during the shoot. It was recorded at 48fps, double the traditional 24fps cinema standard (though still well short of what the human visual system can perceive). One issue is that the sets had to be over-saturated for the recorded footage to have the correct colour grading. The footage was all captured digitally and written to 128GB cards.
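A quick back-of-envelope calculation shows why 48fps capture chews through those 128GB cards. The per-frame size below is purely my own illustrative assumption (compressed frame sizes depend on the chosen REDCODE compression ratio, which I am not quoting here):

```python
# Back-of-envelope: how long does a 128 GB card last at 48 fps?
# The 9 MB/frame figure is an illustrative assumption, not a
# published spec; actual sizes depend on the compression ratio.
fps = 48
mb_per_frame = 9            # assumed compressed frame size
card_gb = 128

frames_per_minute = fps * 60                      # 2,880 frames
mb_per_minute = frames_per_minute * mb_per_frame
minutes_per_card = card_gb * 1000 / mb_per_minute
print(f"{frames_per_minute} frames/min, ~{minutes_per_card:.1f} min per card")
```

Under that assumption a card holds only around five minutes of footage per camera, which gives a feel for the data-wrangling involved in a digital 3D shoot.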

Our smartphones now make extensive use of touch-screen technology as well as voice and facial recognition. This year may even see our mobiles evolve to having flexible screens, ushering in a new era of mobile and content interaction.

Multimedia of course isn’t just limited to something being on a computer screen: what about the blending of lasers and MIDI technology in the laser harp? See the example below of Jean-Michel Jarre playing “Second Rendez-Vous”.

Is the term Multimedia used too much, especially as so many of our devices allow us to consume various forms of media and interact with them in a myriad of ways? Not too long ago the use of a number of different forms of media was seen as something new and novel; now, whether it’s our TVs, mobiles, tablets, computers or even our cars, we are consuming and interacting in ways which a few decades ago would have been science fiction. Will we all be connected via devices similar to Google Glass within a few years?

What about books? Are those strange, typically rectangular objects made from trees, containing words printed double-sided, nearing the end of their lifespan? eBook readers are becoming ever more popular. If you take a look at this article dated 14th Jan 2013 you will see that libraries are now starting to throw out their “books” and go all-digital. Are the days of carrying a school bag stuffed with as many books as you could squeeze in numbered? Some schools are even getting rid of their books altogether, purchasing iPads for every student (such as the Essa Academy with its 840 pupils).

Has the future already arrived? From the late 1980s through the 1990s we saw PADDs being used extensively in both Star Trek: The Next Generation and Star Trek: Deep Space Nine. Is Multimedia still a term that has meaning in this day and age, when more or less every device we use offers a number of forms of media and means of interaction?

Setting up a Motion Capture System – Twelve Camera Flex 13

Back in June, funding was made available by the University for capital asset acquisition. The School of Computing put in a number of bids for equipment, ranging from eye tracking and networking to video production and motion capture. Two years ago, when a similar opportunity came around, I suggested the idea of acquiring a motion capture system. It really boils down to a question of cost-benefit analysis: at that particular time any reasonable system would have been very expensive, so we ended up purchasing a render farm instead. We had quite a few students doing work in 3ds Max, and the extra horsepower to quickly render out thousands and thousands of frames of animation seemed a far more useful resource to have. I spent quite a bit of time during the summer of that year looking over render farm specs along with our Computer Systems Manager, and we eventually settled on a 64-core system. At the same time we also purchased a fairly high-end 3CCD video camera, about 6500 watts of lighting for the Green Screen Room, and a few other bits and pieces.
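To give a feel for why the render farm won out, here is a rough illustration of the arithmetic. The per-frame render time is a made-up assumption for the sake of the example; real times vary enormously with scene complexity:

```python
# Rough illustration of why a 64-core render farm matters. The
# 10-minute per-frame render time is an assumed figure, not a
# measurement; real times depend entirely on the scene.
frames = 5000               # "thousands and thousands of frames"
minutes_per_frame = 10      # assumed average render time per frame
cores = 64

single_machine_days = frames * minutes_per_frame / 60 / 24
farm_days = single_machine_days / cores   # assuming ideal linear scaling
print(f"1 core: {single_machine_days:.1f} days, {cores} cores: {farm_days:.2f} days")
```

Even allowing for imperfect scaling, a month-long single-machine render collapses to well under a day, which is the difference between iterating on an animation and abandoning it.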

With this year’s funding we finally decided to take the plunge and get a motion capture system. Quite a bit of work has been done over the past two years on 3D modelling and animation, so a motion capture system would greatly add to this, giving us the ability to readily animate the 3D characters produced by our students.

The time frame for putting the funding-bid documentation together was quite tight, so I ended up preparing the material for the video production and motion capture systems whilst I was outwith the country on holiday. Towards the end of August the proposals were signed off and approved, so I spent a fair bit of time putting together a finalised shopping list that should provide the School with some really interesting equipment to work with. Throughout most of September various suppliers were found and the items put in for purchase, with the last item (a piece of camera-stabilisation equipment) finally being sorted out just a few days ago. Colin, our Computer Systems Manager, tracked down a company selling the Flex 13 camera system. It came on the market around April 2012 and has some interesting specs, such as a 1280 x 1024 resolution at 120fps. After quite a few emails we finally decided to go with a twelve-camera system plus medium and large MoCap suits.

On Wednesday 26th Sept the motion capture system finally arrived, so I spent the afternoon going through all the parts and checking that everything was OK. On Thursday evening Eyad (a fellow lecturer) and I set about installing the system in our Green Screen Room. We got all the stands set up, the cameras mounted, and all the cabling in place. We then installed the necessary software on one of two new Z400 workstations that we had purchased just a few months previously. The software installation was quite straightforward, but we ran into a problem with the registration of the software license: the error was that it couldn’t find a network. It was around 21:00, so we decided to leave it for the day and get it sorted out when the Systems team were in the next morning.

On Friday morning I called in to see Colin and Tommy to see if the software license issue could be sorted out. We began by transplanting the workstation from C5 into the Green Screen Room; it was then necessary to enable some of the network ports in the room to get the machine up and running on the computer network. All went well, and within a short while we were able to have another go at the license. With the motion capture system powered up, connected to the workstation, and network connectivity established, we entered all the license details, but got the very same error Eyad and I had encountered the night before. We were finally able to register online through a web browser and received the license key via email. Within minutes of saving the license key to disk, the Arena motion capture application was fired up and the video feeds from all the cameras started streaming in. With the full system now operational, we left it at that for the time being.

Yesterday (Saturday) Eyad and I spent the afternoon at Uni aligning the cameras correctly, calibrating the system, donning the motion capture suit, and carrying out the first test of the system. Calibration consisted of wanding the area to establish the capture volume, followed by the establishment of the ground plane, and then the final phase of skeleton calibration, i.e. tuning the system for a specific individual. By the time we left the building (just before closing) we had managed to capture a bit of movement: I tried out the standard calibration T-pose followed by some golf swings (even though I don’t play golf).
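The idea behind the wanding pass can be sketched in a few lines. This is a toy illustration of my own, not how OptiTrack's Arena software actually computes its calibration: as you wave the wand around, the system accumulates tracked 3D marker positions, and the extent of those samples approximates the usable capture volume.

```python
# Toy sketch: wanding collects many tracked 3D positions of the wand
# markers; the axis-aligned bounding box of those samples gives a
# rough capture volume. (Illustrative only -- not Arena's actual
# calibration algorithm, which also solves for camera poses.)

def capture_volume(samples):
    """Axis-aligned bounding box (min corner, max corner) of 3D points."""
    xs, ys, zs = zip(*samples)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

# Hypothetical wand positions in metres, accumulated during wanding.
wand_samples = [(0.2, 0.1, 1.5), (2.8, 0.0, 1.8), (1.5, 2.1, 0.3)]
lo, hi = capture_volume(wand_samples)
print("volume corners:", lo, hi)
```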

As of now the system is working well, though it will be necessary to go through the final skeleton calibration for myself before it is configured to accept motion data generated by my movements. It will be great to complete this final stage and see what the system can really do. All in all, Eyad and I have spent in excess of 20 man-hours (excluding the time spent going through documentation and tutorials) getting the system to its current operational state. Overall the setup was quite simple and straightforward, just needing some time and patience. A large portion of the video production / graphics equipment on order is still in the process of being delivered, so hopefully it will all be in place within the next week or so. It sounds like an exciting semester ahead between now and Christmas.

The videos above and below should provide an overview of some of the steps involved in the process; quite a few more can be accessed from the following playlist, which shows most of the steps. Enjoy.

Squirrel on the 2nd Floor Trying to get into Uni One Day Before Induction

I saw this squirrel yesterday running along the 2nd-floor windowsill of one of the University buildings. The squirrel seemed really eager to begin some higher-education study (scratching at the window, trying to get in), especially as this is the area that houses our PhD students. I guess I should not be surprised, as it is Freshers’ week after all.