Examples and Considerations of Big Data Information Systems harnessing Machine Learning and Artificial Intelligence

A coursework / assignment that I’ve set for university students studying computing is to write a report that discusses the need for Information Systems in the modern world. The theme in question being: Big Data Information Systems harnessing Machine Learning and Artificial Intelligence.

The report should provide some background / context to the topic, explore the various components and architecture of the Information System, the associated Software Life cycle, the Tools / Technologies / Methodologies to hand and on through to Deployment.

The following is a list of some areas / systems they could consider. Do you have any other interesting examples of a Big Data Information System that makes use of ML & AI tools / techniques? If so, please add them in the comments section.

Formula 1 Real-time Telemetry Data Analysis
MRI Image Analysis
NASA James Webb Space Telescope
Earthquake Early Warning Systems
BOINC
Human Genome Project
Search for Extra Terrestrial Intelligence
Live Face Identification System
Amazon Alexa
Uncovering the Past with LIDAR
Google Assistant
Smart City, Real-time Traffic Analysis
Weather Forecasting
Medical Diagnosis
Movie Recommendation Systems
Surface and Subsurface Analysis of Hydrocarbon Potential in the Oil & Gas Industry
Music Recommendation Systems
Sentiment Analysis in Social Media
Self Driving Vehicles
Automated Warehouse
Crime Prediction
Smart IoT Enabled Retail Shop
Smart Patient Monitoring & Analysis with Bio-medical Sensors
CERN Large Hadron Collider
Humanoid Robot (Boston Dynamics)
Humanoid Robot (Tesla)

Some of the topic areas above may benefit from consideration of Ethics and Data Protection / GDPR. Would such elements also be useful / key things that should be given all due consideration in a report exploring Big Data Information Systems? Many interesting considerations can be found in the ACM Code of Ethics and Professional Conduct (poster) (booklet). Do add your thoughts in the comments section.

Do you have any interesting thoughts on the realm of High Performance Computing (HPC) in the form of C / Fortran MPI operations versus the world of Cloud Computing Services, the Graphical Processing Unit (GPU), the Tensor Processing Unit (TPU) or Quantum Computing? Again, it would be interesting to hear your thoughts in the comments section.

Would Explainable AI be of particular importance, especially in areas such as Medical Diagnosis? Once again, comment below with your insights on the need for Explainable AI.


Data Structures – Working with Queues and Games

Queues are a fundamental data structure that everyone in computing should know. Probably the most popular real-world example of a queue in operation is that of a checkout counter in a shop. This is a classic example of First In First Out (FIFO) in operation: the first person to join the queue will be the first to be served at the checkout, and all further customers join the back of the queue. Hence it is a nice example of enqueue() and dequeue() in operation.
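
As a minimal sketch (assuming Python, with collections.deque standing in for the queue and some made-up customer names), the checkout example might look like:

```python
from collections import deque

# A checkout counter modelled as a FIFO queue.
# deque gives O(1) appends and pops at both ends.
checkout = deque()

# Customers join the back of the queue: enqueue()
checkout.append("Alice")
checkout.append("Bob")
checkout.append("Carol")

# The cashier serves the front of the queue: dequeue()
served = checkout.popleft()
print(served)           # Alice was first in, so she is first out
print(list(checkout))   # Bob and Carol remain, in arrival order
```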

The “Hot Potato” Queue Simulation
Another example that is often used is the “Hot Potato” (online), whereby a “Hot Potato” gets passed around by the “people” in the queue. At each iteration a “person” is removed from the front of the queue and placed at the back – therein enqueue(dequeue()). This occurs a certain number of times before the “person” holding the “Hot Potato” is finally removed from the queue. This process continues until just one person remains.
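
The simulation above can be sketched in a few lines of Python (the names and pass count here are made up purely for illustration):

```python
from collections import deque

def hot_potato(names, passes):
    """Simulate the Hot Potato game: after `passes` passes, whoever
    holds the potato is removed. Repeat until one person remains."""
    queue = deque(names)  # the front of the queue holds the potato
    while len(queue) > 1:
        for _ in range(passes):
            # enqueue(dequeue()): pass the potato to the next person
            queue.append(queue.popleft())
        queue.popleft()  # the current holder is eliminated
    return queue.popleft()

winner = hot_potato(["Ann", "Ben", "Cal", "Dee", "Eva"], 3)
print(winner)
```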

Print Queue
An example more tangibly related to computing is perhaps the “Print Queue”. When a new document is sent for printing it is placed at the end of the queue: enqueue(). The process of actually printing a document removes it from the queue: dequeue(). Under many circumstances this will remove the first print job. However, many multi-function printers/photocopiers of today will present a list of print jobs on screen once you log on, allowing you to select which job or jobs you wish to print. So the example of the “Print Queue” isn’t perhaps the best any more, although even with these multi-function devices one can often “Select All, Print & Delete”, which prints each job in the “Print Queue” following the standard FIFO ordering.

So what might be a good example of a queue system in operation that would be applicable to students studying Game Development? One nice example is perhaps that of Waypoints. The following (online) link is to an animation that moves an onscreen object towards a location the user has clicked (a Waypoint); as the user clicks other locations, these are added to the “Waypoint” queue. When the onscreen object reaches a Waypoint location it is dequeued.
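
A hedged sketch of the idea in Python (the Mover class, its speed and the click coordinates are all hypothetical, invented for illustration, not taken from the linked animation):

```python
from collections import deque

class Mover:
    """An onscreen object that walks toward queued waypoints."""

    def __init__(self, x=0.0, y=0.0, speed=1.0):
        self.x, self.y, self.speed = x, y, speed
        self.waypoints = deque()

    def click(self, x, y):
        self.waypoints.append((x, y))  # each click enqueues a target

    def update(self):
        """Advance one step toward the current (front) waypoint."""
        if not self.waypoints:
            return
        tx, ty = self.waypoints[0]
        dx, dy = tx - self.x, ty - self.y
        dist = (dx * dx + dy * dy) ** 0.5
        if dist <= self.speed:         # close enough: waypoint reached
            self.x, self.y = tx, ty
            self.waypoints.popleft()   # dequeue it; the next one is now current
        else:
            self.x += self.speed * dx / dist
            self.y += self.speed * dy / dist

m = Mover()
m.click(3, 0)                # user clicks two locations
m.click(3, 4)
for _ in range(10):          # run enough update ticks to visit both
    m.update()
print((m.x, m.y), len(m.waypoints))
```

Each game tick calls update(); the object only ever chases the front of the queue, so the click order is preserved automatically.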

Message Queue
Another nice example is how to implement a “Message Queue” in a game (online). In this case game avatars can pass messages to one another following a distinct packet/envelope structure: sender, destination, type, data. Instead of having Objects communicate in what could almost be considered a fully interconnected mesh of messages (just consider what a system sequence diagram for this would look like), all Object / Avatar instances communicate with each other through a single queue-based messaging system.
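
A minimal Python sketch of such a central message queue (the MessageBus class, avatar names and message types here are hypothetical and not taken from the linked article, but the packet structure matches: sender, destination, type, data):

```python
from collections import deque, namedtuple

# The packet/envelope structure described above.
Message = namedtuple("Message", ["sender", "destination", "type", "data"])

class MessageBus:
    """A single queue through which all avatars communicate."""

    def __init__(self):
        self.queue = deque()
        self.handlers = {}  # avatar name -> callback invoked on delivery

    def register(self, name, handler):
        self.handlers[name] = handler

    def send(self, sender, destination, msg_type, data):
        self.queue.append(Message(sender, destination, msg_type, data))

    def dispatch(self):
        """Drain the queue in FIFO order, delivering each message."""
        while self.queue:
            msg = self.queue.popleft()
            handler = self.handlers.get(msg.destination)
            if handler:
                handler(msg)

bus = MessageBus()
received = []
bus.register("knight", lambda m: received.append((m.sender, m.type, m.data)))
bus.send("wizard", "knight", "DAMAGE", 12)
bus.send("archer", "knight", "HEAL", 5)
bus.dispatch()
print(received)
```

Because every message passes through one queue, avatars never hold references to each other, which is exactly what collapses the fully interconnected mesh into a single hub.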

Production / Build Queues in Games
One example I considered quite applicable to Game Development students was that of the Production Queue. Many strategy-based games make use of “Production Queues” or “Build Queues” to create anything from Tanks and Ships to Aircraft and Experimental Weapons such as the AC1000 from Supreme Commander 2 (online), developed by Gas Powered Games.

The Noah Unit Cannon Experimental (online) is a fixed emplacement that can queue up the production of several types of land units, such as the Rock Head Tank, Titan Assault Bot and the Sharp Shooter Mobile Anti-Missile Defense, to name but a few.

Simcity Buildit
Simcity Buildit makes very extensive use of Production / Build Queues in the form of Factories that produce basic materials such as Metal, Wood, Plastic, Glass and Electrical Components. These items can take anything from 1 minute for Metal up to 7 hours for Electrical Components to produce. When fully upgraded these factories have a production queue of 5 units. Materials produced in the Factories can then go on to be used in one or more of the nine Commercial Buildings. The Commercial Buildings take the form of the Farmer’s Market, Furniture Store, Hardware Store and Donut Shop, to give just a few examples. Details of all the items these factories and buildings can produce can be seen (online) (online). The following videos give some sense of what these buildings are like.

Given that one can wait almost an entire day to process a full production queue of Beef (11 units), it is very useful to have the opportunity of speeding up the process with “Speedup Tokens” in the form of: Turtle x2, Llama x4 and Cheetah x12. To create “Speedup Tokens” one must either earn them through the “Contest of Mayors” or assemble them from small pieces by creating “Epic Projects”. These are buildings that can create a fragment of a “Speedup Token” every 24 hours. The more “Epic Buildings” one has, the more fragments are needed to create a “Speedup Token”; as a starting point one needs 3 fragments for Turtle, 6 for Llama and 9 for Cheetah.

Perhaps tasking Games Development students with creating Production Queues that simulate the creation of Beef, Televisions, Popcorn or Pizza as is the case with Simcity, or Land, Air and Naval units in the case of Supreme Commander is a good way of demonstrating the use and need for queues – particularly in strategy games. Another interesting reason for focusing on Production / Build queues is that especially in the case of Simcity Buildit, many of the items produced are dependent on other items. Therefore quite long chains of production can be formed just to produce the necessary resources to create one “Expensive / Complex” final item.
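
A rough Python sketch of such a dependency-driven production queue (the recipes, quantities and timings here are illustrative approximations only, not the actual in-game values):

```python
from collections import deque

# Hypothetical items, build times (minutes) and the ingredients
# each one consumes, loosely inspired by SimCity BuildIt factories.
RECIPES = {
    "Metal":  {"time": 1,  "needs": {}},
    "Wood":   {"time": 3,  "needs": {}},
    "Nails":  {"time": 5,  "needs": {"Metal": 2}},
    "Chairs": {"time": 20, "needs": {"Wood": 2, "Nails": 1}},
}

def expand(item):
    """Return the full FIFO production chain for one final item,
    expanding each ingredient before the item that consumes it."""
    order = []

    def visit(name, count):
        for ingredient, qty in RECIPES[name]["needs"].items():
            visit(ingredient, qty * count)   # build prerequisites first
        order.extend([name] * count)

    visit(item, 1)
    return deque(order)

queue = expand("Chairs")
total = sum(RECIPES[job]["time"] for job in queue)
print(list(queue), total)
```

One “Expensive / Complex” final item thus unfolds into a surprisingly long chain of enqueued jobs, which is precisely the behaviour the exercise would have students reproduce.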

Parallel Processing / Super-computing
Simcity Buildit provides a really good example of the costs associated with production / processing, and relates very well to issues around Parallel Processing / High Performance Computing (HPC) / Super-computing and how jobs can impact one another in determining the overall execution time. The classical example of this is the process of making breakfast – many tasks can be done in parallel, though one will be constrained by the cost of the operation that takes the longest. Taking a parallel approach to making “breakfast” can however yield a good deal of cost / time savings over a step-by-step approach (online).

What other Games use Production Queues?
Do you know of any other games that make heavy use of “Production Queues” / “Build Queues”?

What Qualities Make a Game Popular?

I recently made a post about Dare to be Digital 2016 (online) highlighting a number of video pitches. What qualities make a game addictive & popular? To what degree do the gameplay, sound effects, music, graphics, playability and level of difficulty have a bearing on the overall popularity and addictiveness of a game? The following are just a few popular games.

Cooking Fever
Cooking Fever is all about cooking – anything from burgers and hot-dogs, to pizza, sushi and baking. One can upgrade kitchen appliances to make cooking faster, and upgrade the restaurant to increase how long customers will wait and how generous they are with tips. The main interaction style consists of simple drag and drop. When a customer arrives at your kitchen, they ask via a graphic bubble representation for the items they are looking for, e.g. an icon representing a burger. You then need to get some burger buns set out on your work surface and start dragging the ingredients on to complete the burger. One of course needs the cooked burger meat, along with various combinations of extras such as lettuce, tomato and ketchup – dragging each component on to the burger bun. Once the burger is fully assembled, one then drags it to the waiting customer. If you manage to do this quickly enough they will drop coins on the counter, representative of the price of the burger along with a tip for good service, and leave as happy customers. As the levels progress the number of customers and combinations of food items increase. At the time of writing the number of installs was between 10 & 50 Million; the game is available from the Google Play store (online).

Angry Birds
Angry Birds has become an extremely popular game, with 100 to 500 Million installs of the app via the Google Play Store (online). What is it about launching a bird at various targets to get them to topple over that makes the game so popular?

Candy Crush Jelly Saga
With 10 to 50 Million downloads, Candy Crush Jelly Saga is certainly another popular game (online).

Clash of Clans
Available from the Google Play store (online), Clash of Clans has between 100 & 500 Million installs.

SimCity Buildit
With 10 to 50 Million Google Play installs, SimCity Buildit is certainly another popular game (online). This game is all about creating a city and populating it with residences, so you earn Simoleons through the construction of residences and collect tax from them as well. One can also earn cash through the Trade Depot, whereby you can sell goods that you create. One can sell all manner of goods, from the basic raw materials created by factories, to more complex items created by taking the raw materials and forming them into new products such as: Doughnuts, Shoes, Watches, Nails, Vegetables, Tables & Chairs, to name but a few.

As you level up through the game, further opportunities for Trading become enabled, such as the Port for shipping your goods overseas, or the Airport (available once you have a population of 120,000). The Airport allows you to gain special items for building new types of residences with higher population capacity, namely the Paris, London and Tokyo zones. To keep the population happy one must supply them with basic services, provide them with places to relax (parks), and a whole host of other facilities, from Schools and Universities to Entertainment and Gambling.

Over time one can create a city of some 4 million inhabitants and stretch out the area of the city to encompass both the beach and mountains. These areas allow one to build special buildings that can greatly boost the population within a certain catchment area.

At the Vu Tower (available at a population of 90,000) one can unleash a number of different disasters, which allow you to gain valuable Golden Keys. The most basic disaster one can unleash is the Meteor Strike, followed by the Earthquake, Alien Invasion and several more.

What makes a good game?
How important is the embedding of Social Media in a game?
Do you make use of in-app purchases, to buy credits, upgrade systems etc?
Is it easy to lose track of the amount you spend on in-app purchases?
How important is the time it takes to complete a level – especially for Mobile Games?
What are your favorite games and Why?
How often do you play games on your Mobile?

How to Demo DSLR Video Settings Live to a Class

We are now into the second week of teaching & one issue I faced was how to do a live demo to a class on how to configure the various settings on a DSLR camera to get some good quality footage. The solution I came up with was to take a live feed from a video camera & pipe it via HDMI cable into the lab’s projection system. To achieve this I made use of a 2-way HDMI splitter to interface between the output from the camera and the HDMI input to the projection system – as you will see in one of the images below.

Overall it seemed to work out fairly well, with the live feed captured by the Sony NX5E projected across all the screens in the lab for everybody to see. Following on from the demo I got the class to do some filming.

One video created by a pair of students over the weekend recorded at Balgavies Loch, Angus can be seen directly below. The objective had been to capture some nice shots of a forest / nature scene.

Live Demo of DSLR Video Settings


iPads Galore – Codea iPad Programming Workshop

Today I gave a workshop on Codea Programming for the iPad at the Further and Higher Education Advisors Conference 2013, held at the Garthdee Campus of RGU. The following photos should give some sense of the room setup just prior to the workshop commencing. As you will see from the first few images, I just had to put an array of iPads together to see what they were like.

Capturing an Instant in Time – Students Making a Splash

In the previous week, with my class of about 100 first year computing students, we looked at the process of panoramic photography and light-painting, hence we were making use of long shutter durations. This week I thought it would be interesting to go in the opposite direction and capture moments in time of just hundredths or even thousandths of a second. I had seen quite a few videos in the past about capturing such imagery, using both flash and continuous based lighting, hence I gathered together a number of these videos and made a recent blog post about them.

The first image, seen above, is a composite of images taken from two of the three setups that were used. Firstly the stage was set for the capturing of soap bubbles gently floating down to earth. Next the students had a chance to drop some fruit and vegetables into an Aquarium. The final option was to pour some water into a wine glass and capture some of the detail and beauty of flowing water that we overlook on a daily basis.

The day prior to class I set out to purchase some essentials; I began by getting some PVC tape, jugs, straws and a glass scraper to clear the water from the aquarium after the splashes. Having been unsuccessful in finding soap bubbles, I ventured to another shop and was delighted to see they had a good variety, hence I purchased one of each set they had; this amounted to perhaps close to two liters of soap bubbles, so I was quite sure we would have enough for the photo-shoot. The final really large and important task was to find an Aquarium, so I headed in the direction of the beach to seek one out. I looked through quite a few different aquariums, and finally settled on a glass one of dimensions 24″ x 15″ x 12″, capable of holding about 65 liters of water. So with that I carted all this stuff back to the office.

Later that night I paid a visit to yet another shop to pick up some fruit and vegetables that should make a good splash in the aquarium. I also picked up some food colouring, both yellow and blue, for use with the water pouring into the wine glass setup. To capture any spillage from the glass I also picked up a paint tray! So that was more or less everything.

Given the class size was about 100 a video feed was setup between the green screen room in which the photography was taking place and the computer lab where all the students were working on some Photoshop and Illustrator tasks. That way they could see what was going on as the video feed was displayed on three projectors within the lab, hence they could move between one and the other depending on how busy the photographic session was. I have used this technique in the past on a few occasions and have found it to be very useful. All in all it took about an hour to set everything up for the shoot with the help of three students and the support team for the video feed.

The following photographs should give a sense of what was taking place in the green screen room with all three stages running concurrently. As you will see, bubbles were being blown, peppers, strawberries and the like were being dropped into the aquarium & water was being poured into the wine glass. Five lights were used, two each for the aquarium and the bubbles, leaving just one for the wine glass. In total this amounted to the equivalent of 5320 watts of lighting keyed at a colour temperature of 5200 Kelvin.

The following set of images give a sense of what the room was like after a bit of tidying up was done and some things moved around a bit.

The next set of images just give a sense of the materials that were used for the photo-shoot taken around two hours prior to setting up the scenes. The large Nemo sitting happily on the aquarium will probably find a new home back on my desk but inside the aquarium, in which he fits nice and snugly. This of course has the added advantage of keeping him dust free once I find a suitable cover.

The final set of images include some of the water being poured into the wine glass along with the individual shots that were combined together in the first image of the post.

All in all I would guess that a few thousand photographs were taken in the course of a couple of hours. It’s probably also safe to say that the students really seemed to enjoy the photo-shoot, especially given the room was such a hive of activity. Having a number of distinct but related tasks seemed to work well; one thing we didn’t do was to capture water drops falling and splashing into a pool of water, so that is perhaps something for next time, as one can use a few techniques for this alone. If you wish to see some of these images in greater detail, then you can take a look at the corresponding Album.

How to Teach Photography to a Room of 100 Students?

In a recent post I included some videos discussing depth of field and how it can be affected by aperture, focal distance and distance of the object. The question that came to mind however is that of how could I demonstrate elements of photography to a group of about 100 students. Often you may gather a small group of half a dozen crowded around a camera to show them something, however this doesn’t really scale well to a group on the order of 100 or so.

Photography with PixelSense, Canon 600D, a Jib, TV, & Projector

To solve this problem I made use of technology to help them see the live interaction I performed with the settings on the camera itself and remotely using Canon’s EOS Utility. The room the students were in contained three projectors, one more or less in the middle of the room with the others at either end. To allow them to see the interaction I made use of the EOS Utility in conjunction with Microsoft’s PixelSense (Samsung SUR40), providing a table top interactive surface with which to interact with the settings of the Canon 600D. In front of the camera I placed two tables covered with some green cloth and a number of objects at different distances to focus on. You will also notice from the images below that I included a tape measure running down the length of the table.


Located next to the Camera and the PixelSense table I added an HD TV so I could readily see the interactions I was performing. Floating a few feet over the PixelSense SUR40 hung a Sony NX5E video camera, suspended in space in an under-slung position with the help of a Libec Swift Jib 50 Kit (comprising the arm, T102B tripod and DL08 dolly).


The HDMI video feed from the camera was fed to a splitter box with one input and two outputs. As you can guess one of the HDMI outputs fed directly into the HD TV, the other via the use of a HDMI to VGA adapter went off to feed the three projector screens.


All in all I was quite pleased with the overall result, especially as all the students could see what I was doing first hand; moreover there was no need to repeat the process a dozen times or more to a set of small groups all crowded around the camera. After I demoed the variables affecting the depth of field, I let the students come up and have a go at altering settings such as f-stop and focal length themselves. They all really seemed to enjoy interacting with the Camera through the use of the surface and, what’s more, all the other students could see what they were doing as well. They also had a good bit of fun just playing with the controls of the Jib and operating the REMO30 pan/tilt head. Concurrently, after I had demoed the use of the system, I got them to do some multiplicity photographs in our green screen room. The others, who were of course waiting to get their chance to interact with this equipment and take some photographs, were busy working their way through some Photoshop tutorials. So that kept them busy with three distinctive tasks to carry out.


Once they all knuckled down to work, a few 3rd year students dropped by the lab to give me a hand in moving our OptiTrack Flex 13 motion capture system to another room, thereby freeing up our green screen room purely for photographic and video effects work. All in all it was a busy morning, with lots of equipment being moved around. Fortunately I had moved all the equipment you see in the images below into place the night before. You will notice that a shadow is cast by the Sony NX5E video camera and the REMO30 tilt/pan head. I am sure with a bit of shuffling of elements around this can be eliminated for the next time. In the final photograph of the set below, you can see the setup with the projection being displayed on two of the three screens, though the far off screen is quite a distance down the lab. I had hoped to record some video of the system in use, but didn’t get around to it due to the rehousing of the motion capture system, so may give it a go the next time with the elements rearranged in a slightly better manner. I guess the question for the next class is what will I demonstrate next? Some panoramic photography with the use of a Manfrotto QTVR 303Kit was something I had considered as a possibility.


What is Multimedia? A 21st Century Viewpoint

One of the courses / subjects / modules I lecture on focuses on Interactive Multimedia, taken by around a hundred or so third year students. Each year the very first slide I present asks the question “What is Multimedia?”. Given that classes for the second semester will commence once again next week, I thought of asking this question to all those out there within the blogosphere to see what you regard multimedia to be. The word cloud (created using this generator) below is perhaps one simple example of multimedia in action, whereby one can provide a list of words resulting in an image being generated based on the frequency of those words. Perhaps if I receive a good few comments I could create an updated word cloud that better reflects what “Multimedia” is today!


The classical definition would be something along the lines that multimedia combines a mixture of content such as text, images, video, audio, animation and interactivity. Has the definition of what multimedia is changed, given that we are now living in a world where technology is ubiquitous?

What about Smart Televisions? Televisions used to be very much a passive form of information transmission focused on the visual and auditory senses. Smart Televisions of today can be controlled by gestures, and the media that is accessible is no longer just a broadcast that you must tune into, but can now put the user in control with on-demand content and online interactive media. If you want, you can even control the television using your smartphone, along with other things such as the lights and heating throughout your home. It’s probably safe to say that we are living in the age of “the App”, in that for more or less anything you can think of there is an “App” out there in cyberspace just waiting for you to download.

The video below is of the LG booth at CES 2013, featuring built-in cameras for gesture control and microphones to enable voice commands. It will also recommend TV shows and movies based on your viewing habits. Many people have full HD televisions at present, 1920 x 1080 (2K), but we have seen in recent times 4K resolution TVs becoming available to the consumer (though they do have a price tag of 20K+). Higher resolution TVs, such as 8K, are also in existence, such as the Sharp 8K TV demoed at CES 2013.

Speaking of television / film, what about the world of 3D, such as Plano-stereoscopic Imaging, or even the IMAX experience? My most recent experience of an IMAX screening was seeing The Hobbit at Cineworld Dublin just a few weeks ago. It was great seeing the film for a second time, but even better seeing it on a much larger sized screen than what you would usually find at a cinema. By far the best IMAX experience in my opinion was that of the BFI IMAX just around the corner from Waterloo station in London, due to the spherical shaped screen and SPL of the speaker system.

As you all probably know, The Hobbit was recorded using 5K Red Epic Cameras mounted in pairs on a set of rigs (to capture the 3D effect for the left & right eye), allowing one to change the interocular and convergence on the fly during the shoot. The recorded frame rate of 48fps is double the traditional 24fps cinema standard (it still of course falls short of the refresh rate of the human eye). One issue of course is that the sets had to be over-saturated for the recorded footage to have the correct colour grading. The footage of course was all captured digitally and written on to 128GB cards.

Our smartphones now make extensive use of touch screen technology as well as voice and facial recognition. The current year will see our mobiles evolve to having flexible screens, ushering in a new era in mobile and content interaction.

Multimedia of course isn’t just limited to something being on a computer screen; what about the blending of lasers, such as a light harp, and MIDI technology? See the example below of Jean Michel Jarre playing the Second Rendez-Vous.

Is the term Multimedia used too much, especially as so many of our devices allow us to consume various forms of media and interact with them in a myriad of ways? Not too long ago the use of a number of different forms of media was seen to be something new and novel; now however it seems that whether it’s our TVs, mobiles, tablets, computers or even our cars, we are consuming and interacting in ways which a few decades ago would have been science fiction. Will we all be connected with devices similar to Google Glasses within a few years?

What about Books? Are books, those strange typically rectangular objects made from trees that contain words printed double sided, nearing the end of their lifespans? eBook readers are becoming ever more popular. If you take a look at this article dated 14th Jan 2013, you will see that libraries are now starting to throw out their “books” and go for an all digital system. Are the days of carrying a school bag to school stuffed with as many books as you could squeeze in numbered? Schools are even getting rid of their books, with some purchasing iPads for every student (such as the Essa Academy with 840 pupils).

Has the future already arrived? In the mid to late 1980’s & 90’s we saw PADDs being extensively used in both Star Trek: The Next Generation and Star Trek: Deep Space Nine. Is Multimedia still a term that has meaning in this day and age, where it seems that more or less every device we use has a number of forms of media and means of interaction?

Let’s use video to reinvent education

Is the traditional lecture dead? When I was in secondary school it was generally accepted that the attention span of students was about 45 minutes or a little more; hence all classes at Secondary School were 45 minutes. Having been reading a number of articles on the web recently, I keep on seeing that the average attention span is now considered to be 10 minutes. Has the rapid integration of technology reduced our ability to concentrate? Students are used to watching TV, watching YouTube, listening to music and sending text messages on their phones more or less all at the same time; has the ubiquitous nature of all these gadgets reduced our ability to focus and concentrate? Given that University lectures are typically scheduled for an hour or even two, should we reconsider how we approach an audience of students, especially when one is lecturing to 100+ students?

Should academics be looking towards the Khan Academy style of teaching by splitting up lecture topics into distinctive 10 minute segments? Should we all become professional video editors and make our lectures available to our students on YouTube? Has the traditional lecture, which has lasted hundreds of years, become redundant in the matter of a decade or so? How will we be interacting with our students in 2020, 2030 or even 2050?

There is an interesting point highlighted in the TED talk: what if a student understands 95% or 50% of the material? At the next lecture one builds on the knowledge of the previous. Very quickly a student’s knowledge of the subject resembles Swiss cheese, with distinctive holes dotted throughout. Is the future of learning to facilitate self paced learning, with the lecturer providing additional clarification on the topics being taught? It will be interesting to see how tools such as YouTube and social media will change the way in which students are taught, from national school right the way through to university, over the next 5, 10 or 15 years.


Salman Khan talks about how and why he created the remarkable Khan Academy, a carefully structured series of educational videos offering complete curricula in math and, now, other subjects. He shows the power of interactive exercises, and calls for teachers to consider flipping the traditional classroom script — give students video lectures to watch at home, and do “homework” in the classroom with the teacher available to help.
