Monday, February 5, 2018

Big Data Analytics: A long way to walk in the world of health informatics


Image courtesy of Markus Spiske, freeforcommercialuse.net, at Pexels.com
Technology experts say that one of the great technological paradigm shifts, one that will turn every aspect of human life around, is unfolding in the current era. One of its driving forces is Big Data. The objects and devices that surround us are becoming smart at an exponential rate, which means, among other things, that they collect the information each of our actions produces and build metrics from it; every day they add a missing piece to the puzzle that is our human condition. Of course, the field of health informatics is no stranger to this great paradigm shift, and Big Data will certainly be a catalyst for our possibilities in the world of medicine.

Big Data brings a lot of optimism to the scientific community. Thanks to the massive collection and processing of information that artificial intelligence makes possible, scientists are saving decades (or perhaps centuries) of experimentation. This means one thing: an incredibly broad spectrum of possibilities for discoveries and inventions, perhaps too broad for our current capabilities. In optimistic terms, it means tearing down walls once thought impossible. Safer, more effective operations with fewer risks could become a daily matter. Many believe these developments will eradicate diseases that have fought us hard, such as cancer or infections caused by antibiotic-resistant bacteria; some even claim that these scientific advances will take us to a point where we transform our bodies into something beyond the concept of 'human'.

But those are the dreams. There is also reality. The truth is that Big Data technology still has a long way to go before those bridges towards the impossible, and beyond, can be built.

One of the main problems is that too much information is available and there is no way to process and interpret it all, let alone convert it into useful knowledge for solving practical problems. Tons and tons of information sit on servers (and much more is collected every second), yet we would need an immense army of researchers to bring all of it into medical practice. For this reason, research on artificial intelligence is one of the most relevant trends at present: only a super processor (or many of them) could make this dream possible, and that is already a huge challenge in itself.

Read also: 4 Trends That Will Likely Hover Around Health IT In 2018, by Sudir Raju

On the other hand, one of the big problems with Big Data is that it usually gathers a lot of unnecessary information. When researchers finally find a key question to answer for solving a particular problem (for example, regulating the immune system of patients with amyotrophic lateral sclerosis), they often realize that the machines have collected a great amount of information that is completely useless for the issue in question. How do you program the machines to collect the right information before you know what the right information is? This problem is not impossible to solve, though, because, like many other scientific troubles, trial and error eventually lets you adjust the compass in the right direction. The point is that it takes a lot of time.
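The mismatch between what gets collected and what a question actually needs can be sketched in a few lines. This is only a toy illustration; the field names and numbers below are invented, not drawn from any real dataset:

```python
# Hypothetical sketch: devices collect many fields per record, but only a
# few turn out to matter for a given research question. All field names
# and values here are invented for illustration.
records = [
    {"heart_rate": 72, "steps": 4300, "screen_time_min": 180, "t_cell_count": 950},
    {"heart_rate": 80, "steps": 1200, "screen_time_min": 240, "t_cell_count": 610},
]

# Fields the researchers settled on after trial and error; the rest is
# volume without value for this particular question.
relevant_fields = {"heart_rate", "t_cell_count"}

def project(record, fields):
    """Keep only the fields relevant to the current research question."""
    return {k: v for k, v in record.items() if k in fields}

useful = [project(r, relevant_fields) for r in records]
print(useful)  # each record reduced to the two relevant fields
```

The catch the paragraph describes is that `relevant_fields` is only known in hindsight: each round of trial and error redraws that set, and everything collected under the old set may have to be discarded.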

Big Data is fundamentally defined by five major interrelated variables: volume, velocity, variety, veracity, and value. Volume is directly related to the amount of information collected; velocity is the time variable (how much data is collected in how much time); variety has to do with how complex the mined information is; veracity is the qualitative variable that indicates how reliable the collected data is; and, finally, value refers to whether the gathered information corresponds to what researchers need for a particular problem. The complicated issue here is that each of these five variables involves conflicts and major obstacles that health informatics has not yet been able to solve.
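The five V's can be read as a simple profile of a dataset. The sketch below is a minimal illustration under invented numbers (the thresholds and figures are hypothetical, not from any real study); it shows how veracity and value shrink the volume that is actually usable:

```python
from dataclasses import dataclass

@dataclass
class DatasetProfile:
    """Toy profile of a dataset along the five V's (all figures hypothetical)."""
    volume_gb: float        # Volume: how much data has been collected
    records_per_sec: float  # Velocity: how fast new data arrives
    n_source_types: int     # Variety: how heterogeneous the sources are
    pct_verified: float     # Veracity: fraction of records passing validation
    pct_relevant: float     # Value: fraction matching the research question

def usable_volume_gb(profile: DatasetProfile) -> float:
    """Rough estimate of the data that is both reliable and relevant."""
    return profile.volume_gb * profile.pct_verified * profile.pct_relevant

profile = DatasetProfile(
    volume_gb=500.0, records_per_sec=1200.0,
    n_source_types=7, pct_verified=0.8, pct_relevant=0.05,
)
print(usable_volume_gb(profile))  # 500 GB * 0.8 verified * 0.05 relevant ≈ 20 GB
```

Even in this toy example, 500 GB of raw volume yields only about 20 GB of data that is both trustworthy and on-topic, which is the conflict between the V's the paragraph points to.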

Image courtesy of Penn State at Flickr.com
Regarding volume, there are two basic problems. The first, as mentioned above, is that sometimes a lot of information is available but useless for resolving the issue in question (that is, it lacks real value), or there is an excess of supply for little demand and no way to stop the incessant stream of information that crams the servers (which are not infinite). The second, on the contrary, is the problem of a scarce supply for a high demand: the velocity at which the information is collected, processed, and interpreted is not enough to solve a problem, and there is no way to force the rate up, because everything depends not on the researchers but on the machines and the information that reaches them.
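Both volume problems come down to a rate mismatch, which a few lines of arithmetic make concrete. The rates below are made up for illustration only:

```python
# Toy illustration of the velocity mismatch (all rates hypothetical):
# when data arrives faster than it can be processed and interpreted,
# the backlog grows until storage, which is not infinite, fills up.
def backlog_after(seconds, arrival_rate, processing_rate, capacity=None):
    """Unprocessed records accumulated after `seconds` of operation."""
    backlog = max(0.0, (arrival_rate - processing_rate) * seconds)
    if capacity is not None:  # servers are not infinite
        backlog = min(backlog, capacity)
    return backlog

# One hour at 500 records/s arriving, 120 records/s interpreted:
print(backlog_after(3600, 500.0, 120.0))       # 1368000.0 unprocessed records
print(backlog_after(3600, 500.0, 120.0, 1e6))  # capped at 1000000.0 by storage
```

The reverse case, scarce supply for high demand, is the `arrival_rate < processing_rate` branch: the backlog stays at zero, but so does progress, because the pipeline can only interpret what actually arrives.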

So there is still a long journey to walk, but there is hope.

Recommended: Big Data, Health Informatics, and the Future of Cardiovascular Medicine
