Thursday 6 November 2014

Discover the Digital Age Trinity: 3 Things You Must Have in the Digital Era


I remember a discussion with someone who asked what course I was studying in school; I said medicine, and his response was "that your course is real good 'cos you're not going to have problems looking for job". I was mildly irritated by the response, partly because some elderly citizens I have talked with, who had their education in the 50s and 60s when much of the emphasis was on finishing school and having jobs waiting for them, had given similar responses. But the tides have taken a new turn. We are in the 21st century, where the world has witnessed a global economic meltdown; jobs are no longer waiting for graduates of colleges and universities; many companies are cutting jobs, retaining only the most skilled workers and hiring contractors instead of full-time staff; and technology has given everybody the capacity to reach any other person in any part of the world, and made it possible for one service provider to deliver their service to a large user base in the shortest possible time.

I have read so many articles on the skills everyone must have in this 21st century, irrespective of field of study or profession (and I guess you have too); and out of the numerous recommendations, I have sifted out three which I think (you may not agree with me) are must-have prerequisites for everyone in this Internet Age.


1. ENTREPRENEURSHIP 101

Entrepreneurship. Image credit to BuildBiz
Some experts have debated whether entrepreneurship should be taught as a field of study, or introduced across every stratum of society for anyone interested to pick up at his or her own pace. The arguments will keep going, but I think every human being needs to master the basic rudiments of "transforming ideas into great businesses". Moreover, this basic entrepreneurship knowledge should be acquired at the earliest possible stage in life, so that the desire to work for oneself grows as one grows up, instead of the dream of working for someone else after graduation from high school or university. The teaching of the basic processes involved in turning any great idea into a great business should begin right from secondary/high school; in fact, entrepreneurial education should be incorporated into the high school curriculum and made compulsory for every student, with class projects, scheduled during the holidays, geared towards sharpening their ability to spot opportunities in their environment and generate business ideas to capitalize on them. Many experts are beginning to realize how important a step like this, taken at that fresh level, may be for the future of economies around the world. Coupled with skills acquisition training, students who cannot afford university education will have not just the necessary skills, but the right theoretical framework for setting up their own small businesses.

Those who proceed to the university should be further exposed to entrepreneurial education, at least in their first year, where it should be made a compulsory course. And this is where universities in my country, Nigeria, are still lagging behind: most of them are still teaching students outdated material, oblivious to the realities of the 21st-century economy. However, a few of them are beginning to realize the danger; for instance, my school, the University of Ibadan, has introduced entrepreneurial education as one of the general studies courses, meaning every first-year student must take it to be deemed worthy of graduation; the University has also established a centre for entrepreneurship and innovation to stimulate the zeal of "working for oneself" in students.

Entrepreneurial education: entrepreneur and employees
If young people today can mount what I will call an "Entrepreneurial Revolution", demanding that government give more attention to providing an enabling environment for us to create jobs after graduation, the global economy may be on its path to saying goodbye to recession. And I recommend that economists around the world compare the cost and the short- and long-term benefits of government providing jobs for students after graduation with the cost and the short- and long-term benefits of the same government providing an enabling environment for the same students to create jobs themselves. I believe the balance will tilt to the latter; and more effort should be put into realizing the latter while not neglecting the former.

While efforts are being made by the Nigerian government to foster entrepreneurship through its YouWin programme, the business incubator initiative of its Ministry of Communication Technology and other initiatives in sectors like agriculture, more should be done, especially by engaging the private sector (though a few startup accelerator programs run by private corporations in Lagos are doing a lot in taking up fledgling businesses), to expand the options for anyone with the entrepreneurship drive. One way is to use entertainment. Music talent reality shows have become some of the most watched TV programs in Nigeria, bringing unknown music talents to stardom and creating a platform for others to launch their music careers. I recently started watching the American reality TV show Shark Tank, in which owners of young businesses come before top investors to pitch their businesses for funding, in exchange giving a certain percentage stake in their company to the interested investor. While only a few secure funding, even the startups that do not still gain from the wide publicity the show's large viewership gives their businesses. The private sector in Africa can buy the licence from the Shark Tank creators to produce a similar reality show in Africa, just like Big Brother Africa, where startups from across Africa come to pitch their businesses before top African businessmen and women for funding. A move like this would bring unknown African startups to stardom and launch others before a very large audience.


2. RELEVANT ONLINE PRESENCE.

Relevant Online Presence. Image credit to Very Official Blog
The advent of Facebook and other social networks has removed every excuse for not knowing anything about the internet. While spending an unnecessarily long time uploading photos on Facebook and Instagram and tweeting about every single celebrity gossip is unhealthy for productivity, following pages in your area of study, profession and productive hobbies will give you access to unlimited pieces of premium information in these areas at no cost (information that, in some cases, one would literally pay thousands of dollars to receive at seminars). In addition, there are hundreds of sites and YouTube channels offering, for free, courses on almost any discipline one can think of, meaning distance or money is no longer a barrier to acquiring knowledge in any area of one's interest if there is internet access. Examples of these free online schools include Khan Academy; edX, run by Harvard University, MIT and other top universities in the US; and many others. And the good thing is, unlike the conventional school, you get to learn at your own pace; I have registered on Khan Academy and edX for a few courses which I'm taking in my own time and at my own rate. Hence, a lot more can be gotten out of the internet besides checking Facebook updates and visiting celebrity gossip sites.

Being relevant online also includes making critical and insightful comments on the pages, blogs and websites one subscribes to; some experts will also include having your own blog or website where you share your areas of interest with the world--people have got jobs from unexpected places because of articles they wrote on their Facebook pages, blogs and guest websites; and those who run blogs have witnessed increased traffic to their sites because of their great contributions to other online forums.

In addition, a relevant online presence sets the initial platform for launching any business in the face of little or no cash for online advertising, because of the great online communities one has impacted with one's contributions. An example is the Sleeping Baby company, which secured funding from one of the investors on Shark Tank: the couple started the company with just $700 and spent not a dime on adverts; but because the wife belonged to an online community of moms where she had been making great and relevant contributions, this community helped spread word of her company, such that its Facebook page gathered 19,000 likes without paying a cent to Facebook (you'll agree with me how hard it is to get even a thousand likes on a page without paying Facebook).

The issue of affordable internet access is the only barrier, in some parts of the developing world, to tapping into the abundant free resources the digital age holds. While governments in these parts of the world make efforts to attract investments in telecommunication infrastructure, the big internet giants of this world--Facebook, Google and so on--should hasten their efforts to bring internet access to the two-thirds of the world with little or no access, through projects like Facebook's internet-beaming drone Wi-Fi and Google's Project Loon.


3. CODINGUISTICS 101

Programming Language. Image credit to Miami Dade College
Most people today speak at least two languages, with English, French and Chinese (Mandarin) being the most spoken; and most of these speakers learned them not for academic purposes but for every other purpose--to expand their network in foreign territories. But it is mildly unfortunate that most people (including me; though I have just enrolled for a tutorial on it online at Khan Academy, where I'll be learning at my own pace) don't know the most popular language. When I say the most popular, I mean a single language that is spoken in every corner of the globe by humans through computers. The most popular language on earth is the computer language; it is the oxygen that sustains every cell and tissue of the Internet anatomy: without it there would be only dead computers. And irrespective of geographical region or cultural differences, the computer language is the same.

While most people will not become professional programmers and developers, I believe everyone should master the basic elements of this language, hence the subheading Codinguistics 101. Just as patents on inventions and related designs expire after a period of time, some aspects of digital knowledge and information marketing (where people make money by teaching others basic things about information technology) have started expiring, meaning that anyone should be able to perform certain IT tasks without spending a dime (it still amazes me that, at this stage of the Internet Age, some people pay others to do basic things like creating email accounts, creating blogger accounts and installing purchased applications on their PCs). Everyone should know how to do these basic tasks; and this can only be possible if coding is introduced to everyone at a very young age: code writing (relevant in today's world) can be included in the primary and secondary school curricula and made enticing, not compulsory, to every pupil and student. The private sector can come in here (internet giants like Google, Facebook and so on are already doing so much in this area) by creating summer coding camps for kids, teenagers and young adults, and also creating TV shows on code writing, starring kids and teenagers, to further fan the desire to learn coding in everyone.
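For anyone wondering what those "basic elements" look like, here is a tiny illustrative sketch (I've used Python here; the free sites teach several beginner-friendly languages): variables, a function and a loop--part of the alphabet and grammar of Codinguistics.

```python
# A first taste of the computer language: a variable, a function
# and a loop -- the kind of basics taught free on Khan Academy
# and similar sites.

def greet(name):
    """Return a simple greeting for the given name."""
    return "Hello, " + name + "!"

students = ["Ada", "Tunde", "Chioma"]   # a list variable
for student in students:                # a loop over the list
    print(greet(student))               # calling the function
```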

Virtual classroom. Image credit to Khan Academy
On a personal quest to learn how to code (or any other subject of your interest, from science to philosophy), there are countless hubs online for doing just that in your own time, at your own pace and at no cost beyond connecting the browser on your gadget to the server where they reside; examples include Khan Academy, edx.org, Codecademy and many others.

We're stepping into the age of the "internet of things", when virtually everything we use, from home appliances to medical devices, will be connected to the internet; and knowing how to loosen and tighten the elementary nuts and bolts of information technology, of which basic computer programming is part, will eventually become all but compulsory.



White-collar jobs are fast disappearing; companies are hiring contractors and employing only very skilled workers; and technology has become integral to our everyday life. The Luddites may not like it now; but if we hadn't given the Industrial Revolution a chance, the world would not have developed as much as we've seen over the last century. We equally need to give the Digital Age Trinity a chance.

And one more thing: Digital Age Trinity sounds like a very good title for a highly immersive 3-D game. Game developers could build a game in which players have to master three characters (entrepreneurship, digital connectivity and a digital language) in order to survive in a digital economy.





Thursday 9 October 2014

How We Can Avoid Social Media Distraction When There Is Serious Business at Hand


Online distraction while studying. Image credit to Connections Academy.
Oftentimes, while in a lecture (particularly a boring one), or when I'm about to work on something important (which may require I stay online to get some resources), or when I'm about to read, I have found myself drifting away from these serious businesses towards the coral reefs of social media networks such as Facebook, Twitter and Instagram, to check my notifications, who retweeted or favorited my tweets, or who liked my pictures. Before I realize what is happening, I'm spending hours on these social reefs, drowning in the colorful distractions and forgetting what I had planned to work on.

I know a lot of people experience this too; and I have read so many pieces of advice and strategies from different people on how to stay focused and keep away from online distraction when working: strategies such as switching off your phone or its internet access, turning off your email notifications, going to the library without your phone, and so on. But the world has changed in such a way that we now have a digital duplicate of our daily life: the Internet is an inevitable part of our lives. However, we should not allow this technology to ruin us by preventing us from concentrating on the daily activities that are key to our growth and development and that of the society in which we live. This resolve requires that we look for smart ways to stay focused on our work while online.

The Kudoso router, preinstalled with the software. Image credit to Kudoso
One of the smart ways I came across is the strategy designed by Rob Irizarry, a technology expert. Seeing how technology--too much time on TV and on the internet--had taken over his children's lives, with the potential for future health problems from a sedentary life before screens, he decided to design a system he called Kudoso (software and hardware) that limits their access to internet TV and other sites, including Facebook and Twitter, and awards them time on these sites based on points they accumulate by completing other engaging activities such as home chores, school work, lessons on educational sites like Khan Academy and physical exercise such as running. Hence, kids will not be able to access online TV sites such as Netflix, social media sites like Facebook, Twitter and Instagram, and other entertainment-based sites without having worked for the access points. And these access points carry time limits on each of these sites, so that the kids don't spend forever on them. The Kudoso system works as an app that can be installed on home internet routers, and also comes as a router preinstalled with the software.



Rob Irizarry, inventor of Kudoso
While this is ingenious, it is aimed mainly at kids. What about the teenagers and adults who spend most of their time outside the house--in school, at the office and alone in their own apartments--with their smartphones always around them? This age bracket is the most productive in the population, faced with so many tasks to accomplish, but could be under-performing because of distraction from social media at work: in fact, a survey carried out by Salary.com last year showed that 69% of employees in the US spent time on non-work-related websites each day in the office, with social media sites like Facebook, Twitter, Tumblr and Instagram taking the largest chunks of the total time wasted, costing these employees' companies hundreds of millions of dollars. Little wonder some business organisations in Nigeria block access to these social media sites because of their impact on workers' productivity. But we need other options too--to augment the effort inside and outside the office. And one thing we must note is that whatever options emerge must involve our own conscious, voluntary effort if they are to help us stay focused and undistracted while working.

One more option is to develop a mobile application. But hey, wait; this is an idea (others might have thought about it too) I'm throwing to app developers and the likes out there (I'm yet to learn how to code, but I have promised myself I will: it may not be now, but I must surely learn). So let's go back to the application. What if there were a mobile application that could block access to all social media sites and apps, and use an algorithm to block access to other entertainment-based sites (unless you are working on something entertainment-related)? The app would have a Work Mode and a Leisure Mode. For instance, if you are about to work on a project, you open the app on your smartphone or computer (a desktop version could be made too) and put it in the Work Mode. Once in this Mode, you can choose the minimum period you intend to work, or you can leave it on an unlimited period (it will give you the option of easily switching to the Leisure Mode after a minimum period of time). If you're going for a lecture or to work, the app will use your phone's GPS navigation to pop up a reminder that you're heading for the location of your work (it will have a feature that enables input of workplaces, lecture venues and so on via map and GPS) and should switch to the Work Mode to avoid distraction, such that once the lecturer comes into the lecture theatre, or you hit the office and start work, you can choose to switch to the Work Mode. You can also choose to synchronize the app with your phone's reminder or to-do list of activities so that it gives you the option of staying undistracted from online nuisances while accomplishing your tasks.

Someone out there is already asking whether I can't just switch back to Leisure Mode and float on the stream of social media networks and the likes midway into my work. Like I said earlier, its functionality depends, to a large extent, on our conscious effort to stay away from online nuisances during our work periods. However, the app--which I would call UnDistract if I were to develop it--would be designed such that reverting to the Leisure Mode before the minimum period of time (set by default, depending on the activity) has been spent would be very tedious, involving answering a series of questions, covering science, technology, music, arts and so on, drawn from the internet, such that the user may give up midway: and the time spent trying to revert would count as time spent on the actual work, because the user has got involved in some form of mental work (a minimal sketch of this two-mode design follows below). The activity-based minimum time frame feature would start working after the user has accomplished many tasks using a manually set minimum time, and the application's algorithm has gathered enough data to allocate a minimum time frame for any input activity.
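For the developers I'm pitching to, here is a minimal sketch, in Python, of the two-mode idea described above. Everything in it (the class name, the quiz gate, the default period) is my own illustrative assumption, not a spec:

```python
import time

class UnDistract:
    """Toy sketch of the hypothetical UnDistract app: a two-mode
    state machine with a minimum work period and a quiz 'gate' that
    makes reverting to Leisure Mode early deliberately tedious."""

    def __init__(self, min_work_seconds=25 * 60):
        self.mode = "LEISURE"
        self.min_work_seconds = min_work_seconds
        self.work_started = None

    def start_work(self):
        # A real app would start blocking social media sites/apps here.
        self.mode = "WORK"
        self.work_started = time.time()

    def request_leisure(self, pass_quiz):
        """pass_quiz is a callable standing in for the series of
        internet-drawn questions; it returns True only if answered."""
        if self.mode != "WORK":
            return self.mode
        elapsed = time.time() - self.work_started
        if elapsed >= self.min_work_seconds or pass_quiz():
            self.mode = "LEISURE"   # a real app would unblock sites here
        return self.mode
```

The quiz gate is deliberately the only early exit: the friction it adds is the whole point and, as noted above, even a failed attempt keeps the user doing some form of mental work.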

I will keep on saying it--such an application will only be effective if we consciously want to stay undistracted while working: I could as well uninstall it after a few days if it seems to impose restrictions on my undisciplined freedom of deviation while working. But would doing so be for my own good?

Warning: if anyone out there finally develops this app, be sure to give me 5% of the revenue when it explodes with success, else I will sue you the same way the Winklevoss brothers sued Mark Zuckerberg when Facebook became a household name.

Saturday 27 September 2014

Would You Accept Stem Cell Therapy When Other Treatments Fail?

Induced Pluripotent Stem Cell therapy. Image credit to Nature

I remember asking a resident doctor in the haematology department, during a tutorial in my 3rd year in med school (I'm currently in my 5th year), whether it was possible to revert a fully differentiated cell (like a white blood cell or a muscle cell) back to a stem cell, the type of cell that makes up the embryo (the earliest form of a baby in the mother's womb). The question was inspired by two things. First, back in my first year, my interest in genetics and stem cell science led me to what are called induced pluripotent stem cells in a biology text; these stem cells can be generated from any type of cell in the body, averting the need to depend on a human embryo (against which there is a lot of ethical opinion) for stem cells. Secondly, the tutorial was on haematopoiesis, the formation of the different types of blood cells from a type of stem cell in the bone marrow (the equivalent of the sweet stuff you suck when you crack the bone after eating the flesh off a chicken leg).

Induced Pluripotent Stem Cell Potentials. Image credit to Nature
The response of the doctor I reserve the right not to say; but the first reason the question was asked--inducing an already differentiated cell back to an embryo-like stem cell--had already been on the minds of scientists years before I came across it in the text, because of the immense present and possible future benefits that certified success in exploring such a possibility holds. And many scientists across the world did begin exploring this uncharted sea. Progress started emerging in bits from animal studies. But the big bang came from the success recorded using human tissue and cells by Dr. Shinya Yamanaka (he shared the 2012 Nobel Prize for Medicine and won the 2012 Millennium Technology Prize for this work) at Kyoto University, Japan. His team was able to induce fibroblast cells (found in connective tissue) and skin cells back to a fully undifferentiated state; and they did not stop there: they were also able to stimulate the same induced pluripotent stem cells to differentiate into specialised cells such as muscle cells and nerve cells. This success spread like wildfire across the scientific world. It led to the emergence of, among other things, new ways of working on degenerative disorders of the nervous system, such as Parkinson's and Alzheimer's diseases, whose cells do not undergo division, unlike most other cell types in the body, to replace severely damaged or dead parent cells. Scientists were now able to take normal skin or hair cells from patients with these degenerative disorders, revert them back to the stem cell state and then stimulate them to differentiate into healthy nerve cells, enabling them to compare, at the molecular level, the changes that occurred over the course of the patient's life in the diseased nerve cells with the newly differentiated healthy nerve cells.

The concept of induced pluripotent stem cells removed the need to experiment with human embryos, as one can readily induce and form them in the lab from virtually any other cell type in the body. This ease further extended the application of the technique to areas like restoring sight lost to damage or death of the retinal cells at the back of the eyes (they are the nerve cells in your eyes responsible for sending what you see to the brain for proper interpretation; blindness can result from their damage or death). While the field of stem cell therapy is still mostly experimental, would anyone advise their grandmother or elderly dad to go for such treatment if they became blind and the eye doctor confirmed the blindness to be due to degeneration of their retinal cells, and that there were no other treatment options?

RIKEN Centre for Developmental Biology, Japan. Image credit to RIKEN
The choice could depend on how much information the eye doctor gives you concerning the benefits and the mostly unknown risks of induced pluripotent stem cell therapy (and you're legally entitled to every bit of information regarding any treatment modality from your doctor before making your choice of treatment). But it seems that a 70-year-old woman in Japan is keen to regain her sight, after becoming blind from a condition known as macular degeneration (abnormal blood vessels encroaching on the retinal cells and damaging them), without minding the possibility of unknown outcomes that may fall more on the negative side. Scientists at the RIKEN Centre for Developmental Biology in Japan, after a consult with Dr. Shinya Yamanaka, used skin cells from the woman to generate embryo-like stem cells by treating them with four genetic factors (details of which I will not bore you with); then they immersed the induced pluripotent stem cells in the appropriate growth factors to generate retinal cells, which they surgically transplanted into the woman's retina at the back of her eyes, following approval from the Japanese ministry of health.

One assurance in this experimental treatment is that the woman's immune system will not reject the transplanted retinal cells, as they were made from her own skin cells: and this, I believe, will be the mainstay of organ transplantation in the future, when the field of regenerative medicine has come closer to perfection in growing people's tissues and organs from pluripotent stem cells generated from their own body cells (the term 'host versus graft rejection' may find no place in the medical texts of the future). But there are possibilities for unknown negative outcomes in this treatment as well, the most unpalatable for me being the transplanted retinal cells deciding to turn into a cancerous growth. A less heart-breaking outcome would be the death of the retinal cells and hence their failure to restore the woman's sight. However, science is gaining momentum of control over this possibility, the latest coming from the work of 18-year-old Joshua Meier, whose award-winning research--begun as a class project when he was 14--has identified the DNA deletions in the mitochondria linked to aging and short life span in induced pluripotent stem cells; my guess is the next step will be to fully understand the mechanisms of these DNA deletions, and devise ways to avert them, in the process of stimulating induced pluripotent stem cells to differentiate into specialized cells for therapeutic purposes.
Prodigy Joshua Meier in his lab. Image credit to Joshua Meier

While stem cell therapy with human embryonic stem cells is currently the approved option in different parts of the world, it is facing ever-increasing pressure from ethics experts in various dimensions, some of whom are succeeding in dissuading potential candidates from going for the treatment. But success in this first trial of induced pluripotent stem cell therapy in a human will open a new window of opportunity for the treatment of degenerative disorders, especially once we have learnt virtually all the possible negative outcomes and devised strategies to eliminate them, leaving our patients with degenerative diseases and disorders on the doorstep of regaining a renewed form of their lost life.

Friday 8 August 2014

Ebola virus and the Future of Containing very Highly Infectious Diseases.

The Ebola virus. Image credit to the BBC
Now Africa is faced with a new threat in the form of the Ebola virus; the death toll is rising in the three African countries--Guinea, Liberia and Sierra Leone--where outbreaks occurred this year. The Ebola virus, one of the haemorrhagic fever viruses, is extremely contagious and has a fatality rate of up to about 90%, meaning that 9 out of every 10 people with the infection may not survive; though the rate in this outbreak has so far been between 50% and 60%. The virus was first reported in 1976 along the Ebola River in Zaire (now the Democratic Republic of the Congo); there was no outbreak between 1980 and 1993; outbreaks occurred in some of the years between 1994 and 2012; and this year's outbreak is the worst since the virus was discovered in 1976.

And the dawning of this reality has evoked in me questions about how the world, especially Africa, will position itself to tackle future occurrences (probably not of the Ebola virus itself, as it may be eradicated if we get all the necessary public health measures in place) of new viral diseases that may be far more infectious than the Ebola and Lassa viral infections.

A few months back, a case of Lassa fever was reported in the Paediatrics department of our teaching hospital, the University College Hospital, Ibadan; we had what we call a Grand Round, a weekly seminar on pressing health issues, where this Lassa fever case was discussed in full detail. It was at this seminar that I learnt that the one-use, disposable protective suit worn by the health personnel managing a patient with the disease costs about 20,000 naira (about $150), which the majority of Nigerian patients, who by the way do not have health insurance, can't afford: about 3 or 4 of these suits are required daily by the health workers, who take shifts, to manage the infected quarantined patient--that's between $450 and $600 a day.


The best option now, in the current case of the Ebola virus, is to provide excellent public health measures (there is hope, as the World Bank has pledged $200 million, in addition to the $100 million the World Health Organization and the three affected African countries jointly committed, to fight the outbreak in the affected African countries, including Nigeria): various forms of isolation units in hospitals to manage admitted patients who present with the flu-like symptoms associated with Ebola virus infection, and the isolation and monitoring of those who brought the patient to the hospital (the treatment centre should also have the constitutional licence to isolate and monitor the patient's family members who came into contact with him or her after the onset of the symptoms). But this outbreak has bared the need to establish and fund a multidisciplinary medical research facility in Africa with, among many other research duties, a department of Unknown Highly Infectious Diseases. This department would be staffed by African medical research experts in Africa and in the diaspora, who would collaborate with renowned medical experts in top research institutions around the world to quickly get samples from patients with a suspected infectious, but unknown, disease; analyse them for the possible cause; firmly establish the various transmission modes of such a disease; and begin the search for potential therapeutic modalities (including a cure) based on the knowledge accumulated from the various experimental studies carried out on the viruses.

In addition, the question of the ancestry and evolution of any new infectious disease-causing agent must be answered. Though this is a more demanding task, success at it will give the medical world insight into how, for instance, the Ebola and Lassa fever viruses evolved (underwent mutations) to acquire their infectivity and virulence (the capacity of the viruses to cause disease in the people they infect), if there was a time in the ancestry of the viruses when they were not infectious; or, if they were infectious right from their first generation, how they have adapted and improved on their infectivity and virulence. It will also help in making quicker decisions on the best path to follow in designing a treatment protocol if a virus in the same family, or a new strain of the same virus, emerges in the future to cause disease in humans. This is getting more demanding and would mean spending more time with the virus in the lab, right? There is the possibility of a test tube containing virus-laden blood samples slipping and spilling onto the researcher handling it; there could be an accidental needle prick while injecting experimental mice or rats with the virus (to study the immune system's response for possible vaccine development); and a researcher dare not casually leave the lab to take some snacks, no matter how hungry he or she may be, without following the long protocols involved in removing his or her protective suit. Is there a way to totally avoid the unforeseen hazards of infection that these researchers face in the lab, while maintaining the same quality and quantity of research on these very highly infectious disease-causing viruses? A way that would enable a researcher to easily have lunch during work? I guess the solutions are in the future; but the future, I believe, is already here with us. And this future is where the extra collaborators from the US, Japan and other countries with very advanced robotics technologies will come in.

The da Vinci Surgical System. Image credit to Robot Surgery.
For over a decade now, robots have been designed and modified to carry out surgery, both on the battlefield and in the operating theatre, under the full control of human surgeons who operate them remotely, giving rise to the term Robo-Surgeon. The most popular and widely used of these robo-surgery technologies is the da Vinci Surgical System, developed by Intuitive Surgical in Sunnyvale, California. This Surgical System comprises a surgeon's console (a room-like compartment where the human surgeon sits very comfortably, equipped with a high-performance 3-D vision camera and master controls like video game pads), a patient operating table with four interactive robotic arms, and a collection of surgical instruments called EndoWrist instruments. To carry out a major surgery, the surgeon sits in the console, which is separated from the operating theatre where the patient lies on the operating table of the Surgical System, and, through the high-performance 3-D vision camera system, uses the master controls of the console to direct the robotic arms to carry out intricate surgical tasks with a very high level of precision, leaving behind very minimal scarring. This application of robotics in surgery can be replicated in the experimental study of very highly infectious agents like the Ebola virus and other future viruses and bacteria.

A prototype of a robot that can be telecontrolled remotely by a human operator. Image credit to The Indian Express
The future I imagine here will have the robotic arms replaced by more human-looking robots (something more like a Humanich from the TV science fiction series Extant), whose entire functions (movements, vision and decisions in the lab) will be under the total control of researchers in consoles outside the high bio-security labs in which these infectious samples are kept. Hence, the researchers will not need to be in these high bio-security labs in person--only their virtual presence--but they will be able to carry out their research as though they were still in the labs. Moreover, these Robo-Scientists, as I would prefer to call them, will be equipped with a digital note-recording system to enable the human scientists controlling them to document the protocols involved in the research, along with any findings and results, and to share them immediately with other labs around the world doing the same emergency research. This will speed up the development of therapeutic agents, as results emerging from the work can be re-confirmed by other labs doing the same work in the shortest possible time. One more advantage: no human will be exposed to the infectious agents, only the Robo-Scientists, which can easily be sterilized. Sounds like science fiction, right? But the future is already here. And as the hundreds of millions of dollars committed to fighting the Ebola virus outbreak begin to do their job, and as the resolutions of the emergency meeting in Geneva, Switzerland, held between Wednesday and Thursday by the global health experts of the World Health Organization on drafting new measures to tackle the Ebola outbreak are made known to the public, I strongly hope the medical and corporate worlds will share in this future I envision and begin to set in motion the wheels that will contain the emergence of very highly infectious diseases, such as Ebola, in the future.
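To make the tele-operation loop concrete, here is a minimal sketch in Python of what I mean: commands go from the console into the sealed lab, and every step lands in a digital notebook that can be shared instantly. All the names and structure here are my own illustration, not any existing robotics system:

```python
class RoboScientist:
    """Toy model of a tele-operated lab robot: it executes commands
    sent from a console outside the bio-security lab and keeps a
    digital notebook that can be shared instantly with partner labs."""

    def __init__(self):
        self.notebook = []

    def execute(self, command):
        # A real robot would actuate arms and cameras here; the sketch
        # only records the protocol step, which is the part that gets
        # shared with collaborating labs.
        self.notebook.append("Executed: " + command)

    def share_notes(self, partner_labs):
        """Broadcast a copy of the protocol log to every partner lab."""
        return {lab: list(self.notebook) for lab in partner_labs}

robot = RoboScientist()
robot.execute("draw 2 ml from sample tube A")
robot.execute("inoculate cell culture plate 1")
print(robot.share_notes(["Lab-Lagos", "Lab-Kyoto"]))
```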

Wednesday 23 July 2014

Scientists Begin to Unlock Some of the Keys to Drug Resistance

World Health Organization meeting on drug resistance in leprosy. Image credit to National Leprosy Eradication Programme
Some time ago I talked about the threat that drug resistance by disease-causing microorganisms poses to mankind if nothing is done now to tackle it. A few days later, the World Health Organisation echoed the same warning and emphasized the need for urgent action in finding new and potent ways to thwart this potential (and what I call) global terrorist attack by these disease-causing microorganisms, as they continue to challenge our God-given right to replenish, conquer and dominate the world (for the animal activists out there, don't misunderstand me: I'm not talking about the total annihilation of all microorganisms, because there are good guys among them who are minding their own business--the normal flora of our environment--and who are not challenging our God-given rights).

One of the disease-causing microorganisms that has developed what I call smart resistance to the drugs which previously dealt with it is the tuberculosis-causing organism, Mycobacterium tuberculosis. This microorganism has evolved into variants now known as Multi-Drug-Resistant (MDR) and Extensively Drug-Resistant (XDR) TB, which are unaffected by most of the first-line and second-line anti-TB drugs, requiring a combination of anti-TB drugs from more than one class before the patient's condition can see any improvement. This type of treatment, to be effective, may take up to a year or more, meaning more cost and more drug side effects for the patient (who will also have to pay for other drugs needed to counter some of the side effects): this places a big burden on patients in the parts of the world where TB is most likely to flourish--the poor populations of the world, where access to health care is very limited. In addition to this problem, a case of a variant of a particular disease-causing bacterium resistant to all known potent antibiotics has been documented.

Crystal structure of the LptDE complex. Image credit to Nature.
But rights (our God-given rights), I believe, come with the necessary provisions and weapons to defend and protect them. According to research published in the journal Nature, scientists have unraveled the structure and mechanism with which a group of drug-resistant bacteria, termed gram-negative, build the exterior coating wall that, over generations of mutations, has become impermeable to most antibiotics and also able to conceal the bacteria from attack by the host's (human's) immune system. The scientists used the Diamond synchrotron facility in Oxfordshire, which produces intense X-rays about 10 billion times brighter than the light from the sun, to study crystalline forms of protein samples isolated from the exterior of these bacteria at the atomic level. The result was an atomic-scale revelation of the structure of a protein complex called LptDE in the cell wall of the bacteria. The detailed information gathered was then used to create models simulating how this protein complex assembles molecules called lipopolysaccharides in the bacterial cell wall from the inside of the organisms; it was also found that the final stages of this assembly could be attacked from the outside, using new antibiotics, to shatter the whole assembly process and leave the bacteria exposed, without a covering, and vulnerable to the environment--the host immune system's attack. More good news is that the protein complex LptDE has been found to be almost the same across a broad range of gram-negative bacteria that cause a large number of diseases, such as meningitis, meaning that designing a class of potent antibiotics against this key structure could be the master key to treating these diseases. The way forward now, according to experts, is to start exploring this great opportunity to design novel drugs that can inhibit the mechanism of the protein complex LptDE.

The Diamond Light Source synchrotron facility in Oxfordshire. Image credit to Diamond UK.
While this is a great and fundamental discovery that has brought much to hope for, isn't there a possibility that sustained offense against the LptDE mechanism (once we develop antibiotics against it) could push these bacteria to undergo mutations that alter parts of the structure of the component proteins involved in the assembly work, rendering the designed antibiotics useless? There was a time when our current antibiotics worked wonders because they targeted what were then found to be structures and mechanisms crucial to these microorganisms' survival; but the same crucial targets have become smart at adapting to our offenses.

Simulated model of the lipopolysaccharide assembly. Image credit to Nature.
My point is that we've got to have many potent options (like I said in a similar post) for dealing with these microscopic bad guys. In addition to leveraging this current discovery, and also embarking on a suggestion I made in a similar post, I believe there may be special areas in these microorganisms that are very vital to their survival and at the same time do not undergo mutations at the genetic level, because any alteration in the molecular structure of these vital areas would destabilize the microorganisms. Efforts should be geared towards identifying these areas in a global MutaGenome Project--areas I want to tag Rigidity Importance Sites in drug-resistant microorganisms, because they are very important to their survival but do not undergo mutations no matter the changes in the organisms' environment. This would enable the development of drugs targeted towards the translational outcomes (protein structures) of these Rigidity Importance Sites (RIS) in the DNA of the microorganisms. And one way to do this could be to create models of the genomes of some of these microorganisms and try to simulate their genomic replication, transcription and translation, using data gathered from accumulated laboratory investigations and all possible effects of environmental changes on their genomes over several generations--this, I believe, may reveal the areas of the genome that hardly undergo mutations, irrespective of the extent of external threats, but are very crucial to their survival. Drugs designed against these Rigidity Importance Sites would be extremely potent at eradicating these disease-causing villains, and any attempt to develop resistance to the drugs by mutation would be fatally detrimental to them; hence, we would have a double-edged sword against them.
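As a first approximation, spotting candidate Rigidity Importance Sites could look something like the sketch below: given genome sequences of the same organism sampled across many generations or strains (assumed here to be pre-aligned and of equal length), find the runs of positions that never vary. This is my own illustrative take on the idea, in Python, not an existing bioinformatics pipeline:

```python
def find_rigidity_sites(aligned_genomes, min_length=10):
    """Scan pre-aligned, equal-length genome sequences (one string per
    strain/generation) and return runs of positions that never vary
    across all of them -- candidate Rigidity Importance Sites."""
    n = len(aligned_genomes[0])
    conserved = [all(g[i] == aligned_genomes[0][i] for g in aligned_genomes)
                 for i in range(n)]
    sites, start = [], None
    for i, is_conserved in enumerate(conserved + [False]):  # sentinel
        if is_conserved and start is None:
            start = i
        elif not is_conserved and start is not None:
            if i - start >= min_length:
                sites.append((start, i))   # half-open interval [start, i)
            start = None
    return sites

genomes = ["ATGGCCTTAA", "ATGGCATTAA", "ATGGCCTTAA"]
print(find_rigidity_sites(genomes, min_length=4))  # [(0, 5), (6, 10)]
```

In practice, positions that never vary in the sample would only be candidates; establishing that a site cannot mutate without destabilizing the organism is the hard, experimental part.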

And we'll keep on exercising our God-given fundamental rights to dominate disease-causing microorganisms, because there is hope and we are smarter than they are.

Monday 7 July 2014

Bypassing Needle-Dependent Insulin Therapy in Diabetic Patients.


Modern digitized insulin pump. Image credit to Tandem Diabetes Care.
Two weeks ago, when I rotated through the Endocrinology Unit of our Paediatrics department, we had a counselling session in the clinic for children with Type 1 diabetes mellitus (a type of diabetes in which staying healthy and alive depends totally on taking an artificial form of the insulin normally produced in the body); these patients came along with their family members, and a pharmaceutical company that manufactures artificial insulin was also invited. Our consultant endocrinologist headed the counselling session, educating and re-educating these paediatric patients and their families on the management of their medical condition--diabetes--through lifestyle modification (taking the appropriate food, drinks and so on) and the appropriate use of injectable insulin: how many times to inject themselves with insulin in a day, ensuring they take some insulin shots before meals, and so on.

Digitized insulin pump linked to health management software on a PC, for patients and physicians. Image credit to Tandem Diabetes Care
These children, I must say, were learning from these periodic sessions, as evidenced by the very detailed accounts they gave of what they had learnt and of the risks of not adhering to the guidelines given to them. But what worried me as I sat among my fellow medical students that day was the constant pricking these children would have to endure every day to take their insulin, because the only insulin therapy currently available in Nigeria is injectable insulin (variations exist, such as the insulin syringes and the insulin pen, which the invited pharmaceutical company displayed and taught the patients how to use). Aside from this, even the insulin pumps (with all the newest modifications they have undergone) that are common in developed parts of the world still require the patient to insert the infusion set under the skin and carry it around (hence, the patient always has to be cautious about certain activities, so as not to disrupt the inserted infusion set, which would dislodge the pump from the body and pose a health risk to the patient).

Afrezza Technosphere inhalable insulin. Image credit to MannKind Corp.
The worry highlighted above is faced not just by diabetic patients in Nigeria, but the world over. Though research (into artificial pancreases, pancreas transplants and so on) to address this major problem of invasive insulin self-administration is intense, something immediate needs to be done to reduce the need for needle pricking several times a day by people with diabetes, especially Type 1 (people with this type of diabetes may die if the level of sugar in their blood goes far above or below a certain level, and hence standby insulin at all times is very essential). And it seems that there is hope (though, for now, not for Nigerians with diabetes): on the 27th of June this year, the US Food and Drug Administration, FDA (the US version of Nigeria's NAFDAC), approved an inhalable form of insulin called Afrezza, designed by the US pharmaceutical company MannKind Corporation, after the FDA advisory panel met in April this year and over 90% of the members voted in favour of the inhalable insulin, following data from clinical trials in over 3,000 patients with both Type 1 and Type 2 diabetes confirming its efficacy. (Afrezza is not the first attempt at inhalable insulin: the pharmaceutical company Pfizer did come up with its own inhalable insulin, called Exubera, developed by Nektar Therapeutics as far back as 2005; but the product was pulled from the market in 2007 because of the lung problems that ensued in some users, its high cost and its lack of benefit over injected insulin.) The FDA has mandated that Afrezza be subjected to post-market study to monitor possible long-term outcomes, one of which is the possibility of some patients developing lung cancer from use of the product.

The Afrezza inhalable insulin uses what its manufacturer calls Technosphere technology (particles in powder form, made up of biologically non-reactive chemicals, that carry the artificial insulin to the lungs once inhaled and then completely separate from the insulin in the lungs to allow rapid absorption into the blood) to deliver inhaled insulin to the lungs, where the insulin is absorbed rapidly into the blood, reaching maximum levels between 15 and 20 minutes, hence preventing any imminent sugar overload of the blood, especially after meals. Afrezza inhalable insulin is contraindicated in patients who smoke, or who have asthma or chronic obstructive lung diseases such as bronchiectasis.

The major setback, though, is that the inhalable insulin cannot replace the long-acting insulin needed by Type 1 diabetic patients, meaning that these patients still need to inject insulin, though probably only once a day, while using the inhalable insulin before or a few minutes into their meals. Now, this is where something can also be done, maybe not immediately.

Women have the option of using implantable contraceptives (which are inserted surgically, under local anaesthesia so that no pain is felt, deep under the skin of the inner part of the upper arm or thigh), which deliver artificial hormones at the rates required to prevent pregnancy for at least 3 years. Something similar, I think, can be done in the case of insulin: we could have insulin implants designed to release insulin at the rates required for the basal level in these diabetic patients. This would replace the long-acting insulin injection and last probably up to 3 years before needing replacement; there is still pricking, but this time it is probably once in 3 years, and it is done under local anaesthesia, so the patient would not feel any pain. I believe work is ongoing on something like this.

Saturday 21 June 2014

Smart Home--Get Anything You Need with a Wave of the Hand.

Hand gesture control of your Smart Home. Image credit to MIT Media Lab
In my last update I talked about how everything from goods to services is racking up innovative functionalities to earn the credibility of attaching the buzzword 'smart' to its name.

The concept of the smart home has been around for some time now; but it has mainly focused on small-scale features in the home, like heaters with sensors, doors with smart security systems, electronic monitoring of your house's energy consumption, the use of green energy alternatives in cooking, and so on. But now the concept has been taken much farther up the ladder to involve the very house that houses the home, inspired by problems like the scarcity of land in urban areas, portability, mobility and environmental pollution. At the MIT School of Architecture and Planning, architects, civil engineers, city planners and other scientists are living their imagination of the future of housing. The MIT Media Lab arm of the School of Architecture and Planning has designed prototypes of what I will call super smart homes. One of the most interesting of these projects is the CityHome project.

The CityHome RoboWall module. Image credit to MIT Media Lab
Hand-gestured bedroom for some rest. Image credit to MIT Media Lab
The CityHome project depends on a smart modular technology known as the RoboWall to provide the smart home experience. In simple terms, you rent a small room of about 18 square metres and fit it with your customized RoboWall module--a transformable wall system that incorporates furniture, entertainment systems, a kitchen setup, office equipment, a library, storage, a home gym, home lighting, a toilet and bathroom, and anything else found in a home--and then get whatever you want with a gesture of your hand (see the sketch below). If you want to entertain guests, you make the gesture and the RoboWall transforms into the perfect sitting room; this sitting room can later be instructed by voice to reconfigure into a kitchen for cooking, which can then be motioned to transform into a gym for a workout session, a bedroom for rest, or an office suite or library for some serious business; and when you want to send some brown dudes down the pipe, you gesture out the restroom. The RoboWall also enables two purpose-serving sections at once, like the kitchen opening into the living space if you want to shuttle between the two when you are busy with chores and cooking at the same time (an analogue of multitasking which I call MULTICHORING); or the kitchen can be gestured closed if you just need to grab a pack of cookies and a bowl of ice cream from the fridge once and focus on an interesting TV program. This smart functionality of gesture-controlled home reconfiguration makes it possible to live a 74-square-metre apartment experience in an 18-square-metre space with the RoboWall.
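In software terms, the gesture control described above boils down to a small state machine: one room, many configurations, and a mapping from gestures to configurations. Here is a minimal sketch in Python; the gesture names and configuration list are my own inventions for illustration, not the actual CityHome interface:

```python
# Toy model of the RoboWall: each recognized hand gesture maps the
# single 18 m^2 room onto a different configuration.

GESTURE_TO_CONFIG = {
    "swipe_left": "sitting room",
    "swipe_right": "kitchen",
    "raise_hand": "gym",
    "wave_down": "bedroom",
    "draw_circle": "office/library",
    "point_back": "restroom",
}

class RoboWall:
    def __init__(self):
        self.configuration = "sitting room"

    def gesture(self, name):
        """Reconfigure the room if the gesture is recognized;
        unknown gestures leave the room as it is."""
        self.configuration = GESTURE_TO_CONFIG.get(name, self.configuration)
        return self.configuration

room = RoboWall()
print(room.gesture("swipe_right"))  # kitchen, for cooking
print(room.gesture("wave_down"))    # bedroom, for some rest
```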

CityHome enables you to do MULTICHORING, including sending brown dudes down the pipe.

Some serious business.
The CityHome project is still at the prototype stage, making it a futuristic solution to problems already emerging in mega cities around the world, such as scarcity of building space; overcrowding; and climate change from carbon emissions due, in part, to the high energy consumption of homes whose waste is not recycled. But even when the need for it becomes utmost in the future, it will likely, initially, be very expensive for the average income earner hoping to get an apartment of his or her own. However, with time, I think it will come to stay, like smartphones; there may just be something like HIGH-END and LOW-END SMART HOME MODULES, so that the majority gets to own a smart home modular apartment, with some having less functionality than others.
The MIT Media Lab. Image credit to MIT Media Lab





Thursday 5 June 2014

Smart Cars and Preventing Accidents.

Since the first use of the buzzword 'smart' for electronic devices such as phones and tablets (based on the enhanced function of these devices as technology waxes stronger), many other entities, including services (transport, healthcare, insurance, shopping and so on), relying on the power of super-computing technology, have been making efforts to get it attached to them also. We have smart TVs, smart watches, smart shopping, smart almost everything.

Radar traffic detector. Image credit to Radar Detector
But the word 'smart' means being able to make informed decisions when presented with complex problems--choosing the best set of solutions from a myriad of possibilities, which requires tremendous permutations and combinations over an already acquired database of experience, facts and statistics; and only a handful of devices in the various categories of electronics have been able to live up to this high expectation, with smartphones first on the list. Google, Samsung and other tech giants have come up with things like Google Glass and smart watches, with Google planning to bring self-driving cars to the market in a few years' time.

But even before we have self-driving cars--a smart ability in cars--there is already a handful of capabilities being built into new-generation cars to justify the use of the term 'smart car'. We have cars with TVs; internet connectivity and many of the things that come with it--GPS (Global Positioning System) to navigate one's way through unknown territory in one's car; Bluetooth connectivity to link your smartphone to your car and so answer calls or take text messages hands-free; an electronic database of your car's full functionality; and so many others.

Cognitive safety system in a car. Image credit to Autoevolution
These are great features; but what caught my attention recently among the 'new generation' enhancements being added--and what would qualify cars, for me, to have the 'smart' buzzword attached to them--is the so-called cognitive safety system. The cognitive safety system is a technology that uses radars, video cameras and other sensor systems built into cars to obtain real-time data on the traffic of any place; it analyses the archive of road traffic accident data for that place, reconstructs and simulates those accidents and analyses the various safety measures that could have averted them; it then optimizes the gathered information to construct the best set of accident-averting solutions at all times, including in situations of unavoidable collision.

Driver Assist radar technology. Image credit to EE Times
These measures include the Autonomous Emergency Braking system, which uses the synthesized data and the brake pressure in the car to apply maximum braking to avoid a collision, or to reduce the severity of impact in an unavoidable collision, with or without the driver's effort; and the Driver Assist system, which uses the same data to guide the driver on accurate steering, braking and so on.
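At its core, the emergency braking decision can be reduced to a time-to-collision calculation: divide the radar-measured gap to the obstacle by the speed at which the gap is closing, and brake hard if the result drops below some threshold. Here is a minimal sketch in Python, with the threshold and numbers assumed for illustration (real systems weigh far more data than this):

```python
def should_emergency_brake(gap_m, closing_speed_mps, ttc_threshold_s=2.0):
    """Toy Autonomous Emergency Braking rule: estimate time-to-collision
    (TTC = gap / closing speed) from radar data and trigger maximum
    braking when TTC falls below a threshold."""
    if closing_speed_mps <= 0:        # pulling away: no collision course
        return False
    ttc = gap_m / closing_speed_mps
    return ttc < ttc_threshold_s

# Example: a 25 m gap, closing at 15 m/s (54 km/h) -> TTC of about 1.7 s.
print(should_emergency_brake(25.0, 15.0))   # True: brake now
```

The 2-second threshold here is an arbitrary stand-in; a production system would derive it from braking physics, road conditions and the accident data archive described above.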

Some of this stuff is still in the final stages of development; but the future of our driving definitely looks bright, and accident-free, as the requirements for general acceptance of the terms 'smart cars' and 'smart driving' are met one after the other each day we wake up.

Monday 19 May 2014

How much Data can be Stored in so Small a Drive?

Last month, the Millennium Technology Prize for 2014 was awarded to Prof. Stuart Parkin of International Business Machines Corporation (IBM) for his work on disk drive storage technology. The Millennium Technology Prize is awarded every two years to scientists whose technological inventions have, on a global scale, improved people's lives or have the prospect of doing so. And this year's award to disk storage technology got me wondering just how much data can be squeezed into the smallest of drives, and expecting more groundbreaking storage technologies in the future.
Prof. Parkin giving a speech at
the Award Ceremony. Image credit to Xinhuanet

Computer processors are getting smaller and faster each year, in line with the so-called Moore's Law, which predicts that the number of transistors that can be packed onto a chip doubles roughly every two years, so that processors keep getting faster even as they shrink; and this has held for over four decades. Today we have processors performing billions of computations per second in packages that totally dwarf the machines of the 1950s, when a computer and its hard drive, with a memory capacity of a few megabytes, could take up the space of a whole bedroom.
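To see why four decades of doubling is such a big deal, a quick back-of-envelope calculation (assuming the commonly quoted doubling period of about two years) helps:

```python
# Rough Moore's Law arithmetic: transistor counts doubling every ~2 years.
doubling_period_years = 2
years = 40                          # roughly four decades
doublings = years / doubling_period_years
growth = 2 ** doublings
print(f"{growth:,.0f}x")            # about 1,048,576x -- a million-fold increase
```

A million-fold increase in transistor count is why a chip the size of a fingernail now outperforms a machine that once filled a room.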
 
While Moore's Law has held over the years for computers, I think an analogue of it has also been in motion in the area of data acquisition and storage. Storage technology has got smarter over the years, with disk drives shrinking drastically in size while their capacity to store data has grown exponentially. This was made possible by fundamental work in electromagnetism and quantum physics that engineers have been able to leverage.

In the early years of the 20th century, scientists discovered that they could harness the magnetism of materials to store bits of information (a bit is the smallest unit of information that can be stored; 8 bits make 1 byte). Information is stored in a disk drive as tiny magnetized regions in a magnetic film and read back by converting the magnetic changes in the film into an electrical current. While a lot of information could in principle be stored in these tiny magnetic regions, writing and reading vast amounts of data on small disk drives posed a challenge: the magnetic regions got weaker as hard disk drives shrank, requiring ever more sensitive read heads to detect them, especially at room temperature.
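For readers who want the unit arithmetic spelled out, here is a quick sketch, using the decimal prefixes that drive makers use and taking as an example the 6-terabyte drives this technology has now made possible (more on that below):

```python
# Basic storage-unit arithmetic with decimal prefixes.
BITS_PER_BYTE = 8
GB = 10 ** 9                # 1 gigabyte = one billion bytes
TB = 10 ** 12               # 1 terabyte = one trillion bytes

drive_bytes = 6 * TB        # a modern 6 TB hard drive
drive_bits = drive_bytes * BITS_PER_BYTE
print(f"{drive_bits:.2e} bits")   # 4.80e+13 -- one tiny magnetic region per bit
```

That is tens of trillions of magnetized regions packed into a palm-sized device, each of which the read head must detect reliably.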
L-R: Profs. Albert Fert and Peter Grunberg
at the Nobel Prize interview. Image credit to Nobel Prize

GMR structure. Image credit to
Magnet Lab
But as more insight was gained into quantum mechanics, scientists began to explore subtler properties of the electron, such as its spin and how that spin responds to a change in the direction of a magnetic field. In 1988, Albert Fert and Peter Grunberg independently and simultaneously discovered what is known as Giant MagnetoResistance (GMR): a profound change in electrical resistance in a thin-film structure made of alternating layers of ferromagnetic and non-magnetic conductive materials (the two professors won the 2007 Nobel Prize in Physics for the discovery). The phenomenon was immediately found to be very useful for hard disk drives and biosensors, since even the very tiny magnetic changes in the regions where information is stored can cause a significant change in electrical resistance in any GMR structure they come into contact with.

Spin-valve sensor. Image credit to
Wikipedia
This was where Professor Stuart Parkin came in. By the early 1990s, while working at IBM, he had found a way to manipulate the spin-up and spin-down states of electrons in a GMR structure, depending on the magnetic field direction across its multilayered materials, to generate a spin-polarized current that could be switched on or off. This allowed him to design a kind of valve that served as a read-out head: a device that, when flown over the magnetic film of a hard disk drive, detects the stored data by converting the weak magnetic regions into electrical current via large changes in electrical resistance in the GMR part of the valve. The spin valve thus allowed far greater amounts of data to be written to and stored in hard disk drives than was possible before. IBM used the spin-valve sensor to build the first 16 GB hard drive in 1997, and today the technology has enabled hard disk drives with up to 6 terabytes of storage capacity.
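A toy model may help here. In a GMR read head, electrical resistance is low when the magnetic layers are aligned in parallel and noticeably higher when they are antiparallel, and it is this resistance swing that the drive electronics translate into bits. The numbers and the bit mapping below are invented purely for illustration:

```python
# Toy model of reading bits with a GMR/spin-valve read head.
# Resistances and the bit mapping are invented for illustration only.

R_PARALLEL = 100.0      # ohms: layer magnetizations aligned -> low resistance
R_ANTIPARALLEL = 110.0  # ohms: opposed magnetizations -> higher resistance

# The GMR ratio is the relative resistance swing the head can detect.
gmr_ratio = (R_ANTIPARALLEL - R_PARALLEL) / R_PARALLEL
print(f"GMR ratio: {gmr_ratio:.0%}")                 # 10%

def read_track(field_directions):
    """Map each magnetic region's field direction to a bit: an 'up'
    region leaves the layers parallel (low R -> 0), while a 'down'
    region flips the free layer antiparallel (high R -> 1)."""
    return [0 if direction == "up" else 1 for direction in field_directions]

print(read_track(["up", "down", "down", "up"]))      # [0, 1, 1, 0]
```

The real breakthrough was sensitivity: because even a very weak magnetic region produces a large, easily measured resistance change, the regions (and hence the drives) could be made far smaller.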

Google's Data Center in Hamina, Finland.
Image credit to Financial Post.
Professor Parkin's research opened up a new branch of quantum physics called spintronics, which explores how the spin of electrons can be harnessed and applied across the field of computing; this in turn has virtually limitless applications in any area of human endeavour that requires technology, and in this age, every human endeavour needs technology to witness rapid transformation. His spin-valve sensor has undergone several modifications to make it better and more adaptive to the computing demands of today's processes. Companies like Google, Facebook, Amazon and Apple (iTunes), whose services (searches, streaming music and videos, finding friends, shopping online) respond with several possible options before we have even completed our clicks, would not be what they are today without Professor Parkin's spin-valve technology: these highly personalized services depend on data mined about consumers' (your and my) behaviour online, which must be stored for processing and profiling, and these companies run huge data storage centres with thousands of hard disk drives built on the spin-valve technology.
A micro hard drive.
Image credit to IBM

Many other entities, such as telecommunication companies, which store consumers' call data for a period of time, and website hosting companies, also depend on the spin-valve technology for huge data storage capacity. So do national security organisations such as the US National Security Agency (NSA), which mines and stores data on people's and organisations' calls, text messages, emails, Skype calls, internet searches, credit card information, financial records and so on for security profiling. In fact, the NSA recently commissioned a one-million-square-foot data centre in Utah, called Bumblehive, with a reported data storage capacity of one yottabyte, which is equal to one thousand trillion gigabytes.
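To put that figure in perspective, the conversion is easy to check (taking the reported capacity at face value):

```python
# Checking the yottabyte claim with decimal prefixes.
GB = 10 ** 9                  # gigabyte
YB = 10 ** 24                 # yottabyte
print(f"1 yottabyte = {YB // GB:,} gigabytes")
# -> 1,000,000,000,000,000 GB, i.e. one thousand trillion gigabytes.
```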

But one million square feet is a huge amount of land just for data storage, considering the ever-growing demand for housing, the push for environmental efficiency and much more. Can that one yottabyte of storage capacity be squeezed into a smaller space? There is hope, I suppose, as IBM and other groups are working to pack more data into smaller drives. Scientists at the Agency for Science, Technology and Research (A*STAR), Singapore are simulating models of what they call single-grain-based magnetic recording and storage. Current hard drives store each bit of information in a magnetic region that is an aggregate of grains in a magnetic film; their model aims to store each bit in a single grain instead. This would increase the storage density, by their estimates, to 10 terabytes per square inch, and if achieved, it could yield hard disk drives of up to 15 terabytes in storage capacity.

That would mean bigger cloud-based services from companies like Dropbox, Microsoft (SkyDrive) and Google (Drive) at the same or even lower prices; the price of other services like web hosting would also fall significantly, meaning businesses, especially those in the developing parts of the world, would flourish as the cost of maintaining an online presence drops. And as Professor Parkin continues to improve on this groundbreaking innovation of his (the latest effort being what he calls Racetrack memory, in which he is working to exploit spintronics to create a new type of storage that consumes less energy yet stores as much data as magnetic disk drives), many more attendant waves of benefits are expected to ripple across the large waters of human growth and development in the near future of storage technology.
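As a closing back-of-envelope sketch, here is how an areal density figure translates into drive capacity. Two caveats: research densities like this are usually reported in terabits (not terabytes) per square inch, so the sketch assumes that reading, and the usable platter area is my own rough assumption:

```python
import math

# Back-of-envelope: turning areal density into per-platter capacity.
# Assumptions (mine, not the researchers'): density in terabits per
# square inch, and the usable radii of a typical 3.5-inch platter.
density_tbit_per_sq_in = 10
outer_r, inner_r = 1.75, 0.75                       # inches (assumed)
usable_area = math.pi * (outer_r**2 - inner_r**2)   # ~7.85 sq in per side
tb_per_side = density_tbit_per_sq_in / 8 * usable_area
print(f"~{tb_per_side:.0f} TB per platter side")    # ~10 TB
# Two recording surfaces minus real-world formatting overhead lands a
# single-platter drive in the same ballpark as the figure quoted above.
```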