Today I will talk about what is happening in Barcelona in the field of “Cognitive Computing”, the next generation of Big Data systems.
Nowadays it is feasible for companies to find interesting patterns hidden in their data, and predictive models play an important role in doing so. Given a dataset of events, predictive modeling builds a model that predicts the probability of a specific outcome.
This is achieved with programs that use machine learning algorithms. By machine learning we refer to the ability of computer systems to improve their performance through exposure to data, without the need to follow explicitly programmed instructions. At its core, machine learning is the process of automatically discovering patterns in data. Once discovered, a pattern can be used to make predictions.
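As a toy illustration of this idea (not part of our research code; the dataset and feature are made up), here is a minimal predictive model in pure Python: a logistic regression trained by gradient descent, which learns to estimate the probability of a binary outcome purely from example data rather than from explicitly programmed rules:

```python
import math

# Toy "events dataset": a single numeric feature x and a binary
# outcome y. All values here are invented for illustration.
data = [(0.5, 0), (1.0, 0), (1.5, 0), (2.0, 0),
        (3.0, 1), (3.5, 1), (4.0, 1), (4.5, 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Learn the weights w, b by gradient descent on the logistic loss:
# the model improves with exposure to data instead of following a
# hand-coded decision rule.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(5000):
    gw = gb = 0.0
    for x, y in data:
        err = sigmoid(w * x + b) - y
        gw += err * x
        gb += err
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

def predict_proba(x):
    """Predicted probability of the positive outcome for feature x."""
    return sigmoid(w * x + b)
```

After training, `predict_proba` returns a low probability for inputs resembling the negative examples and a high one for inputs resembling the positive examples.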
During the last few years, along with the explosion of data, substantial innovations have led to a boost in the construction and study of new algorithms that can learn from real data to create effective predictive models. These algorithms can now be improved, or “trained”, by exposing them to large datasets that were previously unavailable. Today, vast amounts of data suitable for training these algorithms are readily available thanks to the advent of Big Data.
The general idea is that instead of instructing a computer what to do, we simply throw data at the problem and tell the computer to figure it out itself. We changed the nature of the problem from one in which we tried to explain to the computer how to drive, to one in which we say, “Here’s a lot of data, figure out how to drive yourself”. The software takes on functions we associate with the brain, such as inference, prediction, correlation and abstraction, giving systems the ability to perform them by themselves.
We will use the term “Cognitive Computing” (others use Smart Computing, Intelligent Computing, etc.) to label this new type of computing. It refers to the continuous development of supercomputing systems that enable the convergence of advanced analytic algorithms and Big Data technologies, driving new insights from the massive amounts of available data.
Undoubtedly, the new requirements for dealing with uncertainty, combined with recent advances in machine learning and computing innovation, open up the possibility of rethinking much of the research performed so far. For a long time our research group in Barcelona has been conducting cutting-edge research to improve Big Data middleware layers. Recently, new researchers with a strong background in machine learning and artificial intelligence joined our research group, allowing us to focus our research on Cognitive Computing as well.
Currently we have four research focuses through which we contribute to this area:
- The goal of the first research focus is to enhance neural network algorithms (Deep Learning) using high-performance computing platforms, making them more suitable for tasks involving unstructured data and unlabeled datasets.
- The second one explores an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each application, based on Probabilistic Graphical Models and Bayesian inference.
- We are also developing a suite of novel technologies for carrying out real-time data analytics over the streams of digital photos and videos generated by online communities. The goal is to take advantage of the nature of visual content, inferring knowledge beyond the capabilities of batch-based systems and text-only analytics.
- Finally, we are researching real-time, scalable indexing of images and video, performing distributed near-replica detection over massive streams of shared photos.
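To make the Bayesian side of the second focus concrete, here is an illustrative sketch (not our actual models; the grid, prior and data are invented): Bayes’ rule applied over a discrete grid to infer a coin’s unknown bias from observed flips:

```python
# Discrete grid approximation of Bayesian inference for a coin's
# unknown bias theta, evaluated at 99 points in (0, 1).
thetas = [i / 100 for i in range(1, 100)]
prior = [1.0 / len(thetas)] * len(thetas)   # uniform prior

def update(prior, heads, tails):
    """Posterior proportional to likelihood times prior (Bayes' rule),
    renormalized so the probabilities sum to 1."""
    post = [p * (t ** heads) * ((1 - t) ** tails)
            for p, t in zip(prior, thetas)]
    z = sum(post)
    return [p / z for p in post]

posterior = update(prior, heads=7, tails=3)

# Posterior mean of the bias. With a uniform prior and 7 heads out
# of 10 flips, the analytic answer is the Beta(8, 4) mean, 8/12.
mean = sum(t * p for t, p in zip(thetas, posterior))
```

The same mechanics, where a prior belief is updated into a posterior as evidence arrives, underlie Probabilistic Graphical Models at a much larger scale.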
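As a simplified illustration of near-replica detection (the real streaming system is far more sophisticated; the hash scheme and pixel data below are made up), one classic approach compares compact perceptual fingerprints of images by Hamming distance:

```python
def ahash(pixels):
    """Average hash: one bit per pixel, set when the pixel value is
    above the image mean. 'pixels' is a flat list of grayscale
    values standing in for a real image."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

original  = [10, 200, 30, 180, 15, 190, 25, 170]
near_copy = [12, 198, 33, 179, 14, 195, 22, 168]   # slight noise
different = [200, 10, 180, 30, 190, 15, 170, 25]   # inverted image
```

A near replica produces a hash very close to the original’s, while an unrelated image produces a distant one, so a small Hamming-distance threshold flags likely copies even after minor edits or recompression.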
We will share all of our future achievements in this amazing new research area. Stay in touch!