ARTIFICIAL INTELLIGENCE, brain and health

ARTIFICIAL INTELLIGENCE ARTICLES INVOLVING PROGRAMMING, SOFTWARE ENGINEERING, NEURAL ENGINEERING, ARTIFICIAL NEURAL NETWORKS, ARTIFICIAL LIFE, A-LIFE, FLOYDS, BOIDS, EMERGENCE, COMPUTERS, MACHINE LEARNING, NEURALBOTS, NEURALROBOTICS, ENGINEERING, COMPUTATIONAL NEUROSCIENCE AND MUCH MORE. JOIN UP AND HELP ROBOTICS RESEARCH.


MACHINE LEARNING UNCOVERS DEMENTIA SUBTYPES

A new machine learning algorithm can automatically distinguish between different patterns of progression in patients with a range of dementias. The algorithm could help identify which treatments are best suited to individual patients. READ MORE…

DETECTING FAKE NEWS, AT ITS SOURCE

Researchers have created a new deep learning system that can determine if a news outlet is accurate or biased based on only 150 articles published. The algorithm can also detect the political leanings of a news site. Researchers say fake news articles are more likely to use language that is hyperbolic, subjective and emotional. READ MORE…

WHAT TIME IS IT IN YOUR BODY?

A new blood test can help identify your body’s precise internal time clock in relation to external time. Researchers say the test could help examine the impact of misaligned circadian clocks in a wide range of diseases. READ MORE…

Artificial Intelligence in Healthcare by RockHealth

These three categories—expert systems, traditional machine learning, and frontier machine learning—are organized along this spectrum according to two distinguishing attributes:

  • Their autonomy as assessed by the degree of human guidance they require to function
  • Their explainability—meaning the degree to which humans can examine how an algorithm is coming to a particular prediction or output

These attributes are inversely related: More autonomous and fine-tuned algorithms require less human guidance; however, it’s more difficult to understand what the computer is doing and why.

Don’t assume “moving to the right” on the spectrum is optimal. More advanced algorithms aren’t always better. Most companies should be assessing and using a variety of techniques. For example, Amino, a San Francisco-based company with an online platform that provides healthcare provider recommendations, cost transparency, and appointment booking, is constantly testing algorithms to find the optimal mix of techniques. They recently tested deep learning algorithms to surface trends on physician specialties and evaluated their technique against two guiding questions:

  • What degree of accuracy is necessary to make the product successful?
  • What is the incremental improvement from using a more expensive, sophisticated method? Is a simpler technique available?

Against this framework, they decided the added specificity from the deep learning techniques was not worth the added cost in terms of the development time and computing resources needed. Every company using AI/ML should demonstrate an iterative, flexible, yet rigorous mindset in which they seek a desired level of predictive power using the simplest, most affordable techniques available. Investors, enterprise leaders, and others evaluating AI/ML-powered startups can use the Spectrum of Algorithms to guide conversations about the techniques each startup is using, and the utility and intent of using those particular algorithms.
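The two guiding questions above can be turned into a back-of-the-envelope check. Below is a minimal sketch; the accuracy numbers, cost, and the `value_per_point` business parameter are all illustrative assumptions, not figures from Amino:

```python
# Back-of-the-envelope model-selection check: does a costlier model's
# accuracy gain justify its added cost? All numbers are illustrative.

def worth_upgrading(base_acc, new_acc, extra_cost, value_per_point):
    """Return True if the accuracy gain pays for the added cost.

    value_per_point: business value of one percentage point of accuracy.
    """
    gain_points = (new_acc - base_acc) * 100  # percentage points gained
    return gain_points * value_per_point > extra_cost

# Hypothetical: a deep model adds 1.5 points of accuracy but costs far more
# to develop and run than the simple baseline.
print(worth_upgrading(base_acc=0.920, new_acc=0.935,
                      extra_cost=50_000, value_per_point=10_000))
```

Under these made-up numbers the upgrade does not pay for itself, which mirrors Amino's conclusion that the added specificity of deep learning wasn't worth the added cost.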

Artificial Intelligence in Health Care


Can we reduce chronic care costs? Can we prevent chronic care? Can we be proactive in our health care? Can we identify potential health issues early? Can we diagnose faster and early using artificial intelligence?

http://www.qmed.com/mpmn/gallery/image/tapping-chronic-conditions

Google’s AI Reads Retinas to Prevent Blindness in Diabetics

GOOGLE’S ARTIFICIAL INTELLIGENCE can play the ancient game of Go better than any human. It can identify faces, recognize spoken words, and pull answers to your questions from the web. But the promise is that this same kind of technology will soon handle far more serious work than playing games and feeding smartphone apps. One day, it could help care for the human body.

Demonstrating this promise, Google researchers have worked with doctors to develop an AI that can automatically identify diabetic retinopathy, a leading cause of blindness among adults. Using deep learning—the same breed of AI that identifies faces, animals, and objects in pictures uploaded to Google’s online services—the system detects the condition by examining retinal photos. In a recent study, it succeeded at about the same rate as human ophthalmologists, according to a paper published today in the Journal of the American Medical Association.

“We were able to take something core to Google—classifying cats and dogs and faces—and apply it to another sort of problem,” says Lily Peng, the physician and biomedical engineer who oversees the project at Google.

But the idea behind this AI isn’t to replace doctors. Blindness is often preventable if diabetic retinopathy is caught early. The hope is that the technology can screen far more people for the condition than doctors could on their own, particularly in countries where healthcare is limited, says Peng. The project began, she says, when a Google researcher realized that doctors in his native India were struggling to screen all the locals that needed to be screened.

In many places, doctors are already using photos to diagnose the condition without seeing patients in person. “This is a well validated technology that can bring screening services to remote locations where diabetic retinal eye screening is less available,” says David McColloch, a clinical professor of medicine at the University of Washington who specializes in diabetes. That could provide a convenient on-ramp for an AI that automates the process.

Peng’s project is part of a much wider effort to detect disease and illness using deep neural networks, pattern recognition systems that can learn discrete tasks by analyzing vast amounts of data. Researchers at DeepMind, a Google AI lab in London, have teamed with Britain’s National Health Service to build various technologies that can automatically detect when patients are at risk of disease and illness, and several other companies, including Salesforce.com and a startup called Enlitic, are exploring similar systems. At Kaggle, an internet site where data scientists compete to solve real-world problems using algorithms, groups have worked to build their own machine learning systems that can automatically identify diabetic retinopathy.

Medical Brains

Peng is part of Google Brain, a team inside the company that provides AI software and services for everything from search to security to Android. Within this team, she now leads a group spanning dozens of researchers that focuses solely on medical applications for AI.

The work on diabetic retinopathy started as a “20 Percent project” about two years ago, before becoming a full-time effort. Researchers began working with the Aravind and Sankara eye hospitals in India, which were already collecting retinal photos for doctors to examine. Then the Google team asked more than four dozen doctors in India and the US to identify photos where mini-aneurysms, hemorrhages, and other issues indicated that diabetic patients could be at risk for blindness. At least three doctors reviewed each photo before Peng and team fed about 128,000 of these images into their neural network.

Ultimately, the system identified the condition slightly more consistently than the original group of doctors. At its most sensitive, the system avoided both false negatives and false positives more than 90 percent of the time, exceeding the National Institutes of Health’s recommended standard of at least 80 percent accuracy and precision for diabetic retinopathy screens.
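For context, the sensitivity and specificity of a screen like this are computed from its confusion-matrix counts. A minimal sketch with illustrative counts (not the study's actual data):

```python
# Sensitivity/specificity of a screening test from confusion-matrix counts.
# The counts below are made up for illustration, not the study's data.

def screening_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # fraction of diseased eyes correctly flagged
    specificity = tn / (tn + fp)  # fraction of healthy eyes correctly cleared
    return sensitivity, specificity

# Hypothetical 1,000-image evaluation set
sens, spec = screening_metrics(tp=285, fn=15, tn=665, fp=35)
print(f"sensitivity={sens:.1%}  specificity={spec:.1%}")
```

Avoiding false negatives corresponds to high sensitivity and avoiding false positives to high specificity; the study's system cleared 90 percent on both.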

Given the success of deep learning algorithms with other machine vision tasks, the results of the original trial aren’t surprising. But Yaser Sheikh, a professor of computer science at Carnegie Mellon who is working on other forms of AI for healthcare, says that actually moving this kind of thing into the developing world can be difficult. “It is the kind of thing that sounds good, but actually making it work has proven to be far more difficult,” he says. “Getting technology to actually help in the developing world—there are many, many systematic barriers.”

But Peng and her team are pushing forward. She says Google is now running additional trials with photos taken specifically to train its diagnostic AI. Preliminary results, she says, indicate that the system once again performs as well as trained doctors. The machines, it seems, are gaining new kinds of sight. And some day, they might save yours.

A healthy dose of machine and cognitive technology for health care

Short takes…on Analytics

A blog by Dan Housman, director, Deloitte Consulting LLP


Health care isn’t the only industry realizing the challenges and benefits posed by advances in cognitive technologies, machine learning, and artificial intelligence (AI). But it is an industry quickly leveraging these cutting-edge advances, especially in the areas of research, diagnostics, treatment, and patient outcomes. Alarmists worry that smart machines will replace highly skilled practitioners in everyday health care encounters. They fear a depersonalized and invasive experience that could interfere with their health and personal lives.

Fantasy? Fiction? The future? In 2013 the movie “HER” was released in theaters featuring an AI operating system able to relate and integrate itself into the main character’s life so completely that he fell in love with her. With a year to reflect on this–and increasingly with an earpiece or headphones already in my ear from my phone or computer–I am convinced that the future of patient health will engage me like the computer in HER.

Humans and machines: Better together

Picture Scarlett Johansson whispering in your ear, telling you not to eat that second donut. She will know your behaviors, sensor outputs, preferences, and will offer useful whispers of wanted suggestions to keep your health on track. She will suggest that you grab an apple when you are about to eat that second donut, find the right messages that motivate you to exercise after the holidays, and remind you not just of which medications you forgot to take–but also of how important it is to take your medication daily.

In truth, smart machines aren’t likely to take over our lives, but advances in AI, cognitive, and machine technologies will augment human thinking and specialized skill sets. This should be welcome news to patients, many of whom enjoy a love-hate relationship with technology at home, but could benefit from those same technological advances to improve their quality of health care.

An apple a day–plus machine learning and artificial intelligence (AI)

Patients realize that their electronic devices help them with their day-to-day lives, including their health care consumer products, such as fitness bands. As a consumer, I am concerned with the “pain points” of health care, including my interactions with health care professionals, convenience, utility, and price. A health coach that is neither disruptive nor burdensome to my world, and highly personalized to me, is the ultimate expression of a consumer experience. An AI avatar can provide this.

We are at the dawn of yet another AI era, equivalent to the integration of multiple devices into a single smartphone. The applications of cognitive computing are about to assemble themselves into solutions that will march rapidly towards my best friend, my AI health advisor.

For example, the application Lark is now on the market as a health coach that chats with you on the phone. It chats using advanced learning and presents information against the context of your daily experience. You don’t pick from a complex list of foods to represent your lunch. You enter something in free text, just as you would text a friend. Another example: Cognitive Scale has constructed a health application called Cognitive Concierge focused on specific conditions. It uses a cloud approach to data, absorbing it from many sources and offering insights about the user’s condition and environment. So if you have asthma or chronic obstructive pulmonary disease (COPD), Cognitive Concierge knows to warn you when there is a high pollen count. It can be customized and deployed by health systems to integrate into their care management processes.

Taking the pulse of cognitive apps

Machine learning is also extending physicians’ ability to interpret diagnostic images, such as medical imaging scans. Enlitic is using advanced machine learning to find signals in medical images that radiologists might miss. Unlike machines, the human mind cannot effectively look across all images of all patients and identify critical patterns. Welltok and Watson Health are also heavily investing in cognitive applications, with early interest in high-stakes decisions, such as helping oncologists review protocol selection options. The race is on to make an advisor that patients will welcome into their world.

Other applications available today are cognitive tools working behind the scenes to match content with need. For example, content-recommendation techniques are being adapted to prioritize the videos that can help patients better understand a disease, tuned to their level of understanding and the stage at which they are battling it. Some offer cartoons illustrating how protected their cells are based on their adherence to HIV medication regimens.

Breaking through adoption barriers

The roadblocks to reaching this state of machine nirvana are the many concerns about ethics, risk, and compliance. But compliance itself will rapidly become a domain for cognitive computing. Consider the banking industry: how does a global bank determine that its thousands of locations comply with global, regional, and local legal requirements for operating procedures? It either needs an army of people reading every legal document and every internal policy for discrepancies, or it has to train a cognitive assistant to highlight where potential gaps occur, and then use humans to confirm those gaps and figure out how to remediate issues.

Just as cognitive computing is taking center stage for the banking industry, it will take center stage for health care, helping to address issues around privacy and compliance with Health Insurance Portability and Accountability Act (HIPAA) requirements. These compliance rules will be embedded into the AI that communicates with patients about their health. As a result, reporting of adverse events and dangerous health situations can be streamlined, getting the information to qualified professionals who can mitigate issues quickly.

Put me in, coach!

So here is a summary of why my cognitive AI health coach is coming, and why its arrival is accelerating…

  1. The patient wants to be engaged in his or her preferred context and not have to actively seek out health and behavioral information
  2. Gleaning the patient’s intent and reality of mood will best come from hearing things he or she says and being able to cognitively process speech vs. asking for complex forms to be completed
  3. Knowledge bases are consolidating in clouds that an AI assistant can pull from
  4. Sensor devices are everywhere, and only an AI can really make sense of them
  5. Speech recognition is coming online for hearing what we are saying
  6. Image recognition technology needed to identify useful information in our world is progressing rapidly
  7. The large volume of potential recommendations for non-critical decisions will be a ripe place to filter using a cognitive AI
  8. The rest of the consumer ecosystem is going down this path, so health care would be well advised to piggyback on the progress of these other industries
  9. Compliance issues are going to be some of the first cognitive use cases, so this important area will be embedded into the Cognitive AIs

health concierge project planning ppt

Email Connie at motherhealth@gmail.com to participate as a health consumer, seed funder, or partner in defining the health concierge with Motherhealth’s new site, coming soon at avatarcare.net.


Definition: HIPAA applies health consumer data privacy to data in storage and in transit through anonymity. We shall not collect Social Security numbers, birthdays, full names, or mailing addresses.

We need health data to be shared to find data insights that can speed up health curation and health cures.

Neural networks to learn directly from human-defined rules

Summary: According to researchers, a new machine learning training method can enable neural networks to learn directly from human-defined rules.

Source: University of Toronto.

A new machine learning training method developed at U of T Engineering enables neural networks to learn directly from human-defined rules, opening new possibilities for artificial intelligence in fields from medical diagnostics to self-driving cars.

“Hey Siri, how’s my hair?”

Your smartphone may soon be able to give you an honest answer, thanks to a new machine learning algorithm designed by U of T Engineering researchers Parham Aarabi and Wenzhi Guo.

The team designed an algorithm that learns directly from human instructions, rather than an existing set of examples, and outperformed conventional methods of training neural networks by 160 per cent. But more surprisingly, their algorithm also outperformed its own training by nine per cent — it learned to recognize hair in pictures with greater reliability than that enabled by the training, marking a significant leap forward for artificial intelligence.

Aarabi and Guo trained their algorithm to identify people’s hair in photographs — a much more challenging task for computers than it is for humans.

“Our algorithm learned to correctly classify difficult, borderline cases — distinguishing the texture of hair versus the texture of the background,” says Aarabi. “What we saw was like a teacher instructing a child, and the child learning beyond what the teacher taught her initially.”

Humans “teach” neural networks — computer networks that learn dynamically — by providing a set of labeled data and asking the neural network to make decisions based on the samples it’s seen. For example, you could train a neural network to identify sky in a photograph by showing it hundreds of pictures with the sky labeled.

This algorithm is different: it learns directly from human trainers. With this model, called heuristic training, humans provide direct instructions that are used to pre-classify training samples rather than a set of fixed examples. Trainers program the algorithm with guidelines such as “Sky is likely to be varying shades of blue,” and “Pixels near the top of the image are more likely to be sky than pixels at the bottom.”
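Those two guidelines can be sketched as a toy heuristic pre-classifier. Everything here (the score thresholds, the color rule, the `row_frac` feature) is an illustrative assumption, not the authors' implementation:

```python
# Toy version of heuristic pre-labeling: human-written rules score each
# pixel, and only confident scores become training labels for the network.

def sky_heuristic(pixel, row_frac):
    """pixel = (r, g, b) in 0..255; row_frac = 0.0 at the top of the image."""
    score = 0.0
    r, g, b = pixel
    if b > r and b > g:   # "sky is likely to be varying shades of blue"
        score += 0.5
    if row_frac < 0.4:    # "pixels near the top are more likely to be sky"
        score += 0.5
    return score

def prelabel(pixel, row_frac, hi=0.9, lo=0.1):
    """Turn a heuristic score into a training label, or None if unsure."""
    s = sky_heuristic(pixel, row_frac)
    if s >= hi:
        return 1     # confident positive: usable as a "sky" training sample
    if s <= lo:
        return 0     # confident negative
    return None      # ambiguous: left for the trained network to decide

print(prelabel((80, 120, 220), 0.1))  # blue pixel near the top
print(prelabel((90, 60, 40), 0.9))    # brown pixel near the bottom
print(prelabel((80, 120, 220), 0.8))  # blue but low in the frame: ambiguous
```

The point of the design is that the rules only commit on clear-cut pixels; the network trained on those confident labels then handles the borderline cases, which is where it can exceed its own training.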

Their work is published in the journal IEEE Transactions on Neural Networks and Learning Systems.

Image shows a woman with long, red hair.

This heuristic training approach holds considerable promise for addressing one of the biggest challenges for neural networks: making correct classifications of previously unknown or unlabeled data. This is crucial for applying machine learning to new situations, such as correctly identifying cancerous tissues for medical diagnostics, or classifying all the objects surrounding and approaching a self-driving car.

“Applying heuristic training to hair segmentation is just a start,” says Guo. “We’re keen to apply our method to other fields and a range of applications, from medicine to transportation.”

ABOUT THIS AI RESEARCH ARTICLE

Source: Marit Mitchell – University of Toronto
Image Source: NeuroscienceNews.com image is credited to IEEE Trans NN & LS.
Original Research: Abstract for “Hair Segmentation Using Heuristically-Trained Neural Networks” by Wenzhangzhi Guo and Parham Aarabi in IEEE Transactions on Neural Networks and Learning Systems. Published online October 18 2016 doi:10.1109/TNNLS.2016.2614653

CITE THIS NEUROSCIENCENEWS.COM ARTICLE
University of Toronto. “New AI Algorithm Taught By Humans Learns Beyond Its Training.” NeuroscienceNews. NeuroscienceNews, 16 November 2016.
<http://neurosciencenews.com/ai-human-learning-5534/>.

Abstract

Hair Segmentation Using Heuristically-Trained Neural Networks

We present a method for binary classification using neural networks (NNs) that performs training and classification on the same data using the help of a pretraining heuristic classifier. The heuristic classifier is initially used to segment data into three clusters of high-confidence positives, high-confidence negatives, and low-confidence sets. The high-confidence sets are used to train an NN, which is then used to classify the low-confidence set. Applying this method to the binary classification of hair versus nonhair patches, we obtain a 2.2% performance increase using the heuristically trained NN over the current state-of-the-art hair segmentation method.
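The three-way split described in the abstract can be sketched as follows, with a nearest-centroid stand-in for the neural network and made-up one-dimensional scores (the paper's actual features and network are not reproduced here):

```python
# Sketch of the abstract's pipeline: a heuristic splits data into
# high-confidence positives, high-confidence negatives, and a low-confidence
# set; a model trained on the confident sets classifies the unsure set.
# The "network" here is a trivial nearest-centroid stand-in.

def heuristic_score(x):
    # Hypothetical heuristic: 1-D feature where larger means more "hair-like".
    return x

def three_way_split(samples, hi=0.7, lo=0.3):
    pos = [x for x in samples if heuristic_score(x) >= hi]   # confident positives
    neg = [x for x in samples if heuristic_score(x) <= lo]   # confident negatives
    unsure = [x for x in samples if lo < heuristic_score(x) < hi]
    return pos, neg, unsure

def train_and_classify(samples):
    pos, neg, unsure = three_way_split(samples)
    # "Train" the stand-in model: centroids of the two confident sets.
    mu_pos = sum(pos) / len(pos)
    mu_neg = sum(neg) / len(neg)
    # Classify each low-confidence sample by its nearer centroid.
    return {x: int(abs(x - mu_pos) < abs(x - mu_neg)) for x in unsure}

labels = train_and_classify([0.05, 0.1, 0.2, 0.45, 0.55, 0.8, 0.9, 0.95])
print(labels)
```

Training and classification happen on the same data, as in the paper: the confident subsets supply the labels, and only the ambiguous remainder is classified by the trained model.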
