Machine learning is a buzzword in the technology world right now, and for good reason: It represents a major step forward in how computers can learn.
At its most basic, a machine learning algorithm is given a "training set" of data and then asked to use that data to answer a question. For example, you might provide a computer with a training set of photographs, some labeled "this is a cat" and others labeled "this is not a cat." You could then show the computer a series of new photos, and it would begin to identify which ones were of cats.
Many machine learning systems keep adding to their training set: every photo the system identifies, correctly or incorrectly, can be fed back in as additional training data, so the program effectively gets "smarter" and better at completing its task over time.
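To make this concrete, here is a minimal sketch of that classification step using a pre-trained model in the browser, via the ml5.js library introduced later in this article. It assumes the page loads ml5.js from a script tag and contains an img element with the id "photo"; both of these are assumptions for illustration.

```javascript
// Minimal sketch: ask a model pre-trained on everyday objects what it sees
// in a photo. Assumes ml5.js is loaded and the page has <img id="photo">.
const photo = document.getElementById('photo');

// Load MobileNet, a model trained on a large labeled image set.
const classifier = ml5.imageClassifier('MobileNet', () => {
  // Once the model is ready, classify the new photo.
  classifier.classify(photo, (error, results) => {
    if (error) {
      console.error(error);
      return;
    }
    // results is a list of labels with confidence scores, e.g.
    // [{ label: 'tabby, tabby cat', confidence: 0.92 }, ...]
    console.log(results[0].label, results[0].confidence);
  });
});
```

Note that the model never sees our photos during training here; it simply applies what it learned from its original training set to each new image it is shown.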
Machine learning algorithms can process more information and spot more patterns than their human counterparts.
The Institute of Medicine at the National Academies of Sciences, Engineering, and Medicine reports that "diagnostic errors contribute to approximately 10 percent of patient deaths" and account for 6 to 17 percent of hospital complications. It is important to note that physician performance is typically not the direct cause of diagnostic errors. Researchers instead attribute diagnostic errors to a variety of factors, including:

- Inefficient collaboration and integration of health information technologies (health IT)
- Gaps in communication among clinicians, patients, and their families
- A healthcare work system that, by design, does not adequately support the diagnostic process

To provide additional context, a review of 25 years of malpractice claim payouts in the U.S. by Johns Hopkins researchers showed that diagnostic error claims occurred more often in outpatient settings (68.8 percent) than in inpatient settings (31.2 percent). However, errors that occurred in an inpatient setting were roughly 11.5 percent more likely to be lethal. Total payouts over the 25-year period amounted to $38.8 billion.

To address these challenges, many researchers and companies are leveraging artificial intelligence to improve medical diagnostics. One study used computer-assisted diagnosis (CAD) to review the early mammography scans of women who later developed breast cancer; the computer spotted 52 percent of the cancers as much as a year before the women were officially diagnosed. Machine learning can also be used to understand risk factors for disease in large populations: the company Medecision developed an algorithm that identified eight variables to predict avoidable hospitalizations in diabetes patients.
Stanford University researchers have trained an algorithm to diagnose skin cancer using deep learning, specifically deep convolutional neural networks (CNNs). The algorithm was trained to detect skin cancer or melanoma using "130,000 images of skin lesions representing over 2,000 different diseases."

In the U.S., there are approximately 5.4 million new skin cancer diagnoses each year, and early detection is critical for survival. Early detection correlates with a 97 percent five-year survival rate, but that rate drops quickly at later stages, falling to 15 to 20 percent at stage IV. In 2017, an estimated 9,730 people were expected to die of melanoma; one person dies of melanoma every 54 minutes.

To provide context, a visual examination is the first step of a skin cancer diagnosis: a dermatologist inspects a lesion of interest with the assistance of a dermatoscope (a handheld microscope). If the dermatologist believes the lesion is cancerous, or if the initial evaluation is inconclusive, the dermatologist follows up with a biopsy.

Stanford's deep learning algorithm was tested against 21 board-certified dermatologists, who reviewed a reported 370 images and were asked whether "they would proceed with biopsy or treatment, or reassure the patient" based on each image. The results showed that the algorithm matched the 21 dermatologists in determining the best course of action across all images. These are promising results; however, the research team acknowledges that additional, rigorous testing is required before the algorithm can be integrated into clinical practice, and our research did not find evidence of any clinical applications at this time.
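The Stanford architecture and training pipeline are not reproduced here, but as a rough illustration of what a convolutional neural network for image classification looks like in code, here is a minimal sketch written with TensorFlow.js (the library that ml5.js, introduced below, builds on). The layer sizes, the 224x224 input resolution, and the binary melanoma/not-melanoma output are assumptions for illustration, not the Stanford model.

```javascript
// Illustrative sketch only (not the Stanford model): a tiny convolutional
// neural network for binary image classification, written with TensorFlow.js.
import * as tf from '@tensorflow/tfjs';

const model = tf.sequential();
model.add(tf.layers.conv2d({
  inputShape: [224, 224, 3],   // RGB lesion images, resized to 224x224
  filters: 16,
  kernelSize: 3,
  activation: 'relu',
}));
model.add(tf.layers.maxPooling2d({ poolSize: 2 }));
model.add(tf.layers.conv2d({ filters: 32, kernelSize: 3, activation: 'relu' }));
model.add(tf.layers.maxPooling2d({ poolSize: 2 }));
model.add(tf.layers.flatten());
model.add(tf.layers.dense({ units: 64, activation: 'relu' }));
model.add(tf.layers.dense({ units: 1, activation: 'sigmoid' }));  // melanoma vs. not

model.compile({ optimizer: 'adam', loss: 'binaryCrossentropy', metrics: ['accuracy'] });
// model.fit(images, labels, { epochs: ... }) would then train the network
// on a labeled dataset such as the lesion images described above.
```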
ml5.js is built on top of TensorFlow.js, Google's powerful machine learning library for the browser. ml5.js includes a feature for classifying images: ml5.imageClassifier() is a method that creates an object which classifies an image using a pre-trained model. We will use this to classify images of melanoma.
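Here is a minimal sketch of that classification step, assuming a model has already been trained on labeled melanoma images and exported as model.json. The model path 'model/model.json' and the element id "lesion" are hypothetical placeholders; ml5.imageClassifier() can also load a built-in pre-trained model such as 'MobileNet' by name.

```javascript
// Sketch of classifying a lesion image with ml5.imageClassifier().
// Assumes a custom model exported as model.json; the path and element id
// are hypothetical placeholders.
const lesionImage = document.getElementById('lesion');

const melanomaClassifier = ml5.imageClassifier('model/model.json', () => {
  melanomaClassifier.classify(lesionImage, (error, results) => {
    if (error) {
      console.error(error);
      return;
    }
    // Each result pairs a label (e.g. 'melanoma' / 'not melanoma')
    // with a confidence score between 0 and 1.
    console.log(results);
  });
});
```

A custom or re-trained model is needed here because MobileNet's original training labels cover everyday objects, not skin lesions, so it cannot recognize melanoma on its own.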
For further pointers, see the discussions on GitHub about how to do image classification with ml5.js.