Examples and Considerations of Big Data Information Systems harnessing Machine Learning and Artificial Intelligence

A coursework assignment that I’ve set for university students studying computing is to write a report that discusses the need for Information Systems in the modern world. The theme in question is: Big Data Information Systems harnessing Machine Learning and Artificial Intelligence.

The report should provide some background / context to the topic, explore the various components and architecture of the Information System, the associated Software Life Cycle, the Tools / Technologies / Methodologies to hand, and continue on through to Deployment.

The following is a list of some areas / systems they could consider. Do you have any other interesting examples of a Big Data Information System that makes use of ML & AI tools / techniques? If so, please add them in the comments section.

Formula 1 Real-time Telemetry Data Analysis
MRI Image Analysis
NASA James Webb Space Telescope
Earthquake Early Warning Systems
BOINC
Human Genome Project
Search for Extra Terrestrial Intelligence
Live Face Identification System
Amazon Alexa
Uncovering the Past with LIDAR
Google Assistant
Smart City, Real-time Traffic Analysis
Weather Forecasting
Medical Diagnosis
Movie Recommendation Systems
Surface and Subsurface Analysis of Hydrocarbon Potential in the Oil & Gas Industry
Music Recommendation Systems
Sentiment Analysis in Social Media
Self Driving Vehicles
Automated Warehouse
Crime Prediction
Smart IoT Enabled Retail Shop
Smart Patient Monitoring & Analysis with Bio-medical Sensors
CERN Large Hadron Collider
Humanoid Robot (Boston Dynamics)
Humanoid Robot (Tesla)
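
To make one of the items above a little more concrete, here is a minimal, illustrative sketch of a Sentiment Analysis pipeline of the kind a report might describe. It uses scikit-learn; the handful of example posts and their labels are made up purely for illustration and are not drawn from any real dataset.

    # A minimal sentiment-analysis sketch (scikit-learn); the posts and labels
    # below are made-up placeholders, not a real dataset.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    posts = [
        "Loving the new traffic app, saved me 20 minutes today",
        "The roadworks on the bypass are a nightmare again",
        "Great service at the smart checkout, so quick",
        "App crashed twice, completely useless this morning",
    ]
    labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(posts, labels)

    print(model.predict(["Brilliant, the journey planner worked first time"]))

In a real Big Data Information System the same idea would sit behind streaming ingest and far larger training sets, but the modelling step looks much like this.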

Some of the topic areas above may benefit from consideration of Ethics and Data Protection / GDPR. Would such elements also be useful / key things that should be given due consideration in a report exploring Big Data Information Systems? Many interesting considerations can be found in the ACM Code of Ethics and Professional Conduct (poster) (booklet). Do add your thoughts in the comments section.

Do you have any interesting thoughts on the realm of High Performance Computing (HPC) in the form of C / Fortran MPI operations versus the world of Cloud Computing Services, Graphics Processing Units (GPU), Tensor Processing Units (TPU) or Quantum Computing? Again, it would be interesting to hear your thoughts in the comments section.
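
For readers unfamiliar with the MPI style of HPC mentioned above, here is a minimal sketch of a message-passing computation. It uses mpi4py, the Python binding onto the same MPI operations used from C / Fortran, and assumes the script is saved as, say, pi_mpi.py and launched with mpirun; it is a toy illustration rather than a benchmark.

    # Minimal MPI sketch using mpi4py (run with: mpirun -n 4 python pi_mpi.py).
    # Each rank computes part of a numerical estimate of pi, then the partial
    # sums are combined with a reduce operation - the same pattern used in
    # C / Fortran MPI codes.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    n = 1_000_000                      # total number of integration intervals
    h = 1.0 / n
    local_sum = 0.0
    for i in range(rank, n, size):     # each rank takes every size-th interval
        x = h * (i + 0.5)
        local_sum += 4.0 / (1.0 + x * x)

    pi = comm.reduce(local_sum * h, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"pi ~ {pi:.8f}")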

Would Explainable AI be of particular importance in areas such as Medical Diagnosis? Once again, comment below with your insights on the need for Explainable AI.
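
As a rough illustration of what “explainable” can mean in practice, the sketch below ranks input features by permutation importance on scikit-learn’s built-in breast cancer dataset. It is only one simple technique among many (SHAP, LIME, counterfactuals and so on), and the dataset choice is purely for convenience.

    # Illustrative Explainable AI sketch: permutation feature importance on a
    # diagnostic-style dataset (scikit-learn's built-in breast cancer data).
    # It shows which inputs most affect the prediction - a first step towards
    # the kind of explanation a clinician might reasonably demand.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
    for name, score in ranked[:5]:
        print(f"{name:30s} {score:.3f}")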


Shader Driven Rendering Demos

The following videos are from the work carried out by one of my Honours Project students this past academic year. One can follow this link to read some further detail about it. One may access a playlist of all the videos below on YouTube. The engine consists of over 55,000 lines of code spread across almost 80 classes.

A rewrite of the NVIDIA SDK Island demo using the XPR Python library and the original NVIDIA media resources.

A rewrite of the NVIDIA SDK Ocean demo as an extension of XPR which uses the inbuilt Compute Shader pipeline along with the original NVIDIA media resources.

XPR Python based rewrite of the NVIDIA SDK Terrain demo. Uses the XPR engine’s mesh generators, resource management system, multipass framework and inbuilt tessellation pipeline.

A demo that showcases XPR native instancing and geometry shader driven particles using the textures and animation logic from the Microsoft SDK Particle sample.

An optimised, procedural representation of the Menger Sponge fractal using the XPR engine’s inbuilt pipeline. The demo supports up to 7 recursive subdivisions of the original cube and can animate each resulting cube individually.
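
For anyone curious about the subdivision rule itself, here is a small, engine-independent Python sketch of how the Menger Sponge cube positions can be generated recursively; the XPR-specific rendering details are deliberately left out.

    # Sketch of the Menger Sponge subdivision rule: each cube is split into a
    # 3x3x3 grid and the 7 sub-cubes whose indices contain two or more "centre"
    # positions (the body centre and the six face centres) are discarded,
    # leaving 20 cubes per subdivision.
    def menger(cubes, depth):
        """cubes: list of (x, y, z, size) tuples; returns the cubes after `depth` subdivisions."""
        for _ in range(depth):
            next_cubes = []
            for (x, y, z, s) in cubes:
                s3 = s / 3.0
                for i in range(3):
                    for j in range(3):
                        for k in range(3):
                            if [i, j, k].count(1) >= 2:   # drop centre + face centres
                                continue
                            next_cubes.append((x + i * s3, y + j * s3, z + k * s3, s3))
            cubes = next_cubes
        return cubes

    print(len(menger([(0.0, 0.0, 0.0, 1.0)], 3)))   # 20**3 = 8000 cubes at depth 3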

A demonstration of hardware instancing as supported by the XPR engine. This sample renders 125,000 boxes using only one model with multiple positions.
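
The XPR calls themselves are not shown here, but the idea of “one model, many positions” can be sketched as the per-instance data that would be uploaded to the GPU. The grid layout below is an assumption, chosen simply because 50 x 50 x 50 gives the 125,000 instances mentioned above.

    # Building per-instance data for hardware instancing: one box model, many
    # positions. 50 x 50 x 50 = 125,000 instances. The upload-to-GPU step is
    # engine specific and omitted here.
    import numpy as np

    n, spacing = 50, 2.0
    axis = (np.arange(n) - n / 2) * spacing
    x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
    instance_positions = np.stack([x, y, z], axis=-1).reshape(-1, 3).astype(np.float32)

    print(instance_positions.shape)   # (125000, 3) - one translation per instance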

A rewrite of an old Microsoft SDK demo using new completely shader-driven methods. Uses XPR multipass and post-processing features to showcase HDR.
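
As a rough, engine-independent illustration of the final step in such an HDR post-processing chain, the snippet below applies the classic Reinhard tone-mapping operator in NumPy; in the demo itself this kind of mapping would run inside a fragment shader.

    # Sketch of the last post-processing step in an HDR pipeline: tone mapping
    # high-dynamic-range values down to a displayable [0, 1) range using the
    # simple Reinhard operator, L / (1 + L).
    import numpy as np

    hdr = np.array([0.05, 1.0, 4.0, 16.0])   # example radiance values, > 1 allowed
    ldr = hdr / (1.0 + hdr)                  # Reinhard tone mapping
    print(np.round(ldr, 3))                  # all values mapped into [0, 1)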

An XPR engine demo that demonstrates how to override pipeline bindings in order to create and animate custom fractal shaders. In this case the fractals represented are the Mandelbrot and Julia sets.
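
The iteration those custom shaders evaluate per pixel can be sketched in plain NumPy; the resolution and iteration count below are arbitrary, and the shader version simply runs the same escape-time loop in parallel on the GPU.

    # Escape-time iteration behind the Mandelbrot demo: z <- z^2 + c, counting
    # how many steps each point takes to escape |z| > 2 (the Julia set variant
    # fixes c and varies the starting z instead).
    import numpy as np

    w, h, max_iter = 80, 40, 50
    re = np.linspace(-2.0, 1.0, w)
    im = np.linspace(-1.2, 1.2, h)
    c = re[np.newaxis, :] + 1j * im[:, np.newaxis]

    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=int)
    for _ in range(max_iter):
        mask = np.abs(z) <= 2.0
        z[mask] = z[mask] ** 2 + c[mask]
        counts[mask] += 1

    for row in counts:                 # crude ASCII render of the set
        print("".join("#" if n == max_iter else " " for n in row))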

An XPR engine demo that loads a series of meshes that form a car from an FBX file.

Computational Power versus Energy Costs

It is always interesting to see the balance between the cost of electrical energy and the amount of compute power that can be achieved. For many years CPU clock speeds increased rapidly, though this has slowed in recent times in favour of multi-core architectures, where a number of low-power cores can achieve the same result with significant power savings. The higher the clock speed, the greater the demand on electrical power, and we are now approaching an impasse where energy costs are the main driving force behind supercomputer installations. GPUs have become a very popular high performance computing tool over the past few years with their move to multi-core architectures on the scale of 512 cores and upwards, so it is now becoming a question of balance between CPU and GPU computing.

We also live in a world surrounded by low-energy mobile devices: many of their processors run in the Gigahertz range, and dual / quad-core phones / tablets are becoming the norm. Can the computational power of these be somehow harnessed for scientific purposes whilst they are charging? When you think about it, all those billions of mobile devices around the world are just sitting there using only a fraction of their actual capabilities – is it an untapped computational resource just waiting to be discovered?
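
A back-of-envelope sketch of the clock-speed argument: dynamic power scales roughly with C x V² x f, and voltage has to rise with frequency, which is why several slower cores can match the aggregate throughput of one fast core for noticeably less power. The constants below are purely illustrative, not measurements of any real chip.

    # Rough illustration of why multi-core at lower clocks can win on energy:
    # dynamic power scales approximately as P ~ C * V^2 * f, and V must rise
    # with f. The numbers here are illustrative only.
    def dynamic_power(f_ghz, v):
        C = 1.0                        # arbitrary capacitance constant
        return C * v ** 2 * f_ghz

    single_fast = dynamic_power(4.0, 1.3)       # one core at 4 GHz, 1.3 V
    four_slow = 4 * dynamic_power(1.0, 0.9)     # four cores at 1 GHz, 0.9 V

    print(f"1 x 4 GHz : relative power {single_fast:.2f}")
    print(f"4 x 1 GHz : relative power {four_slow:.2f}  (same aggregate clock throughput)")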

tesampu

Warehouse-size supercomputers costing $1 million to $100 million can seem as distant from ordinary laptops and tablets as Greek immortals on Mount Olympus. Yet the next great leap in supercomputing could not only transform U.S. science and innovation, but also put much more computing power in the hands of consumers.

The next generation of "exascale" supercomputers could carry out 1 billion billion calculations per second — 1,000 times better than the most powerful supercomputers today. Such supercomputers could accurately simulate internal combustion engines of cars, jet plane engines and even nuclear fusion reactors for the very first time. They would also enable "SimEarth" models of the planet down to the 1 kilometer scale (compared to 50 or 100 kms today), or simulations of living cells that include the molecular, chemical, genetic and biological levels all at once.
"Pretty much every area of science is driven today by theory, experiment and…
