Archive for the ‘Artificial Intelligence’ Category

The Doctor Game: Artificial intelligence to help avert blindness – The Westerly Sun

How can doctors diagnose and treat 425 million diabetes patients worldwide? That number keeps climbing, projected to reach 700 million by 2045. There are millions more with undiagnosed prediabetes, and millions more with undiagnosed hypertension. All these people are destined for lives defined by cardiovascular problems and complications, including debilitating conditions like blindness. Diabetes is swamping health care systems worldwide. Let us be clear: whatever we have been doing to fight the problem, it is not working.

But now, Artificial Intelligence (AI) is offering new possibilities. Using new technologies, data science, vast quantities of medical images, and computer algorithms, it is possible to fight diseases differently. The medical model of a patient and a doctor is outdated. We need to put AI on our health care team and use analytical methods to predict problems before they occur and to help doctors and patients make better decisions.

Computer-assisted retinal analysis (CARA) is one such technology. Developed by DIAGNOS, a Montreal-based company, CARA uses retina scans to detect early warning signs of big health problems. And CARA can do it on a scale that will make a big difference in fighting the diabetes epidemic.

The retina, the back part of the eye, is the only area of the body where doctors can easily see the condition of arteries and veins without invasive procedures. Early detection of atherosclerosis (hardening of the arteries) in the retinas of diabetes patients signals a warning that the same problem is occurring in the coronary arteries. This is why the retina is called "the window to the heart."

Prevention is always better than cure. But this is easier said than done in many parts of the world where highly trained retinal specialists are in short supply. We are more fortunate in North America, but retinal checkups are mainly the purview of ophthalmologists focused on your eyes, not your cardiovascular system.

Type 2 diabetes has become a worldwide epidemic and an expensive problem for every health care system. It is not just a single disease: by triggering atherosclerosis, it decreases blood supply to many parts of the body, with catastrophic results. For example, longstanding diabetes increases the risk of blindness, heart attack, and kidney failure, which may require renal dialysis or a kidney transplant.

Doctors can only treat so many patients. So this problem is an example of where we can leverage technology to screen millions of people. CARA can scan an eye in two seconds. Furthermore, it can scan hundreds of patients for hours without getting tired or making errors. We need to use AI to detect retinal changes and prevent diabetic complications, averting countless cases of blindness and other problems, improving lives, and saving dollars.
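To put the two-second figure in perspective, a quick back-of-the-envelope calculation shows the screening throughput a tireless automated system could sustain. The function name and the assumption of uninterrupted operation are mine, for illustration only:

```python
# Rough throughput of an automated retinal screener that, as the
# article states, completes one scan every 2 seconds.

SECONDS_PER_SCAN = 2

def scans_per_shift(hours: float, seconds_per_scan: int = SECONDS_PER_SCAN) -> int:
    """Scans completed in an uninterrupted shift of the given length."""
    return int(hours * 3600 // seconds_per_scan)

print(scans_per_shift(8))  # an 8-hour day: 14400 scans
```

That is a volume no human grader could approach, which is the article's point about screening at population scale.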

Andre Larente, president of DIAGNOS, recently remarked, "CARA can now look at a patient's retina, discover the presence of hypertension and predict a chance of stroke in 12 to 24 months." Given that CARA can do this across very large populations of patients, at low cost, it's easy to see the appeal of this technology from a health care and economic perspective, not to mention the incentive to individual patients to reduce their risk profile.

There's no doubt that the capacities of artificial intelligence are changing the way we can fight illness, and companies like DIAGNOS are important partners in medical practice. The key is in scaling up. CARA has accumulated a vast database of retinal photos of patients worldwide. This data can be used for predictive modeling. So the next step will be getting this data into the hands of those who can take steps to stop the progression of illness, change conditions leading to disease, and prevent these avoidable health problems in the first place.

Dr. W. Gifford-Jones, aka Ken Walker, is a graduate of the University of Toronto and Harvard Medical School. You can reach him online at his website, docgiff.com, or via email at info@docgiff.com.

View post:
The Doctor Game: Artificial intelligence to help avert blindness - The Westerly Sun

VisionQuest Biomedical Inc. and The University of New Mexico Combine Artificial Intelligence and Infrared Imaging to Diagnose Early Signs of Diabetic…

ALBUQUERQUE, N.M.--(BUSINESS WIRE)-- VisionQuest Biomedical Inc. and the University of New Mexico School of Medicine have been awarded a three-year, $3 million grant from the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), part of the National Institutes of Health (NIH), to complete the clinical validation of a new technology to detect early signs of diabetic peripheral neuropathy (DPN), also known as diabetic foot.

Dr. Peter Soliz (PhD), founder and chief technology officer of VisionQuest, says, "Our patented technology for detecting early signs of peripheral neuropathy will fundamentally change how physicians manage this severe complication of diabetes. This system will complement our already successful EyeStar system for the detection of diabetic retinopathy and will allow us to screen for multiple diabetes complications in one visit." Dr. Mark Burge (MD), deputy director of the University of New Mexico's Clinical & Translational Science Center (CTSC) and co-principal investigator on this project, adds, "A simple test that can be performed by the primary care physician in the clinic and which is highly sensitive and specific does not currently exist. This device will fill an important gap in providing comprehensive care to individuals diagnosed with diabetes."

Diabetes affects 34.2 million people in the United States, or 10.5 percent of the population. The Foundation for Peripheral Neuropathy estimates that over 70 percent of people diagnosed with diabetes have developed DPN, a painful complication of diabetes that leads to loss of sensation, foot ulcers, and nearly 54,000 amputations per year.

Current screening methods cannot reliably detect the early stages of DPN, when preventative care can improve outcomes. VisionQuest's fully automated, noninvasive system analyzes real-time thermal video of changing temperatures on the bottom of the foot to produce highly sensitive and consistent measurements of blood flow that can be used for diagnosis in primary-care clinics and do not require interpretation. The device and technique were awarded a US patent in 2014.
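VisionQuest's patented pipeline is not described in detail in the release, so the sketch below is only a hypothetical illustration of the general idea: estimating how quickly each region of the sole of the foot changes temperature across a stack of thermal video frames. The function name and the simple per-pixel linear-slope metric are assumptions for illustration, not the company's method.

```python
import numpy as np

def temperature_change_rates(frames: np.ndarray, timestamps: np.ndarray) -> np.ndarray:
    """
    frames:     (T, H, W) thermal video of the sole of the foot (deg C)
    timestamps: (T,) frame acquisition times in seconds
    Returns an (H, W) map of per-pixel temperature change rates
    (deg C per second), via a closed-form least-squares line fit.
    """
    t = timestamps - timestamps.mean()        # center the times
    centered = frames - frames.mean(axis=0)   # center the temperatures
    # slope = sum_i t_i * x_i / sum_i t_i^2, computed for every pixel at once
    return np.tensordot(t, centered, axes=(0, 0)) / np.sum(t * t)
```

Regions with impaired microvascular blood flow would be expected to rewarm abnormally slowly, showing up as low values in the rate map; a downstream classifier could then flag such patterns for follow-up.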

Through this NIDDK grant, VisionQuest will complete the clinical validation needed to pursue clearance by the Food and Drug Administration (FDA) to bring the device to market in the United States. Now more than ever, VisionQuest is committed to expanding access to health care for populations living with chronic diseases in the United States and around the world.

About VisionQuest Biomedical Inc.: VisionQuest develops and delivers innovative artificial intelligence-based imaging technologies that increase access to health care for the people who need it the most. We serve patients and providers in the most efficient and cost-effective ways possible. Dr. Soliz founded VisionQuest in 2007 to develop AI techniques that could be used by health-care professionals to evaluate digital medical photographs, specifically retinal images that showed evidence of diabetic retinopathy (the most common complication of diabetes and the leading cause of blindness in the working-age population) and other pathologies. In the United States, VisionQuest has established a network of clinics in which to study computer-based detection of retinal pathologies. In Mexico, our EyeStar software is used to screen patients for diabetic retinopathy. In the sub-Saharan country of Malawi, VisionQuest is applying retinal screening to the detection of malarial retinopathy.

About The University of New Mexico Clinical & Translational Science Center: UNM's CTSC supports high-quality collaborative translational science locally, regionally, and nationally; fosters scientific and operational innovation to improve the efficiency and effectiveness of clinical translational research; and creates, provides, and disseminates domain-specific translational science training and workforce development. It is committed to bettering health by streamlining science, transforming training environments, and improving the conduct, quality, and dissemination of research from laboratories to clinical practice, and out into communities. This prestigious designation ensures New Mexico remains a leader in the biomedical research field. It also fuels our culture of scientific discovery and its impact on health.

View source version on businesswire.com: https://www.businesswire.com/news/home/20200601005198/en/

The rest is here:
VisionQuest Biomedical Inc. and The University of New Mexico Combine Artificial Intelligence and Infrared Imaging to Diagnose Early Signs of Diabetic...

Federal Judge Orders National Security Commission on Artificial Intelligence to Open Meetings and Records to Public – Law & Crime

U.S. District Judge Trevor N. McFadden

In a victory for government transparency advocates, a federal judge on Monday ordered the government commission responsible for developing policy recommendations on the use of artificial intelligence in U.S. national security and defense to begin opening up its operation to scrutiny by making its meetings and records available to the public.

U.S. District Judge Trevor McFadden of Washington, D.C., an appointee of President Donald Trump, reasoned in a 20-page opinion that the National Security Commission on Artificial Intelligence (NSCAI) was subject to the Federal Advisory Committee Act's (FACA) forward-looking publication and access requirements, thereby putting an end to the commission's ability to operate largely in secret.

"Today, the Court holds that Congress can and did impose Janus-like transparency obligations upon the AI Commission," Judge McFadden wrote in reference to the two-faced Roman god who looked into both the past and the future. "No rule of law forced Congress to choose just one."

Established by Congress in 2018 and chaired by former Google CEO Eric Schmidt, the NSCAI is tasked with considering "the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States."

In a lawsuit filed last year, internet rights advocacy group EPIC (Electronic Privacy Information Center) said the Commission violated FACA by operating almost entirely in secret. According to the complaint, the NSCAI "has held the vast majority of its meetings behind closed doors and has failed to publish or disclose any notices, agendas, minutes, or materials for those meetings."

Judge McFadden held that under FACA, the boards, councils, and commissions that furnish expert advice, ideas, and diverse opinions to the Federal Government are also required to keep the public informed about their activities and to take affirmative steps to make their records public, even absent a request.

EPIC previously won a court ruling which declared that the NSCAI is subject to the Freedom of Information Act (FOIA), which forced the Commission to begin disclosing its past records upon request.

Read McFadden's full ruling below.

EPIC v NSC AI by Law&Crime on Scribd

[Image via Alex Wong/Getty Images]


Read the original:
Federal Judge Orders National Security Commission on Artificial Intelligence to Open Meetings and Records to Public - Law & Crime

Thanks To Renewables And Machine Learning, Google Now Forecasts The Wind – Forbes

(Photo by Vitaly Nevar/TASS via Getty Images)

Wind farms have traditionally made less money for the electricity they produce because they have been unable to predict how windy it will be tomorrow.

"The way a lot of power markets work is you have to schedule your assets a day ahead," said Michael Terrell, the head of energy market strategy at Google. "And you tend to get compensated higher when you do that than if you sell into the market real-time."

"Well, how do variable assets like wind schedule a day ahead when you don't know the wind is going to blow?" Terrell asked. "And how can you actually reserve your place in line? We're not getting the full benefit and the full value of that power."

Here's how: Google and the Google-owned artificial intelligence firm DeepMind combined weather data with power data from 700 megawatts of wind energy that Google sources in the Central United States. Using machine learning, they have been able to better predict wind production, better predict electricity supply and demand, and, as a result, reduce operating costs.

"What we've been doing is working in partnership with the DeepMind team to use machine learning to take the weather data that's available publicly, actually forecast what we think the wind production will be the next day, and bid that wind into the day-ahead markets," Terrell said in a recent seminar hosted by the Stanford Precourt Institute for Energy. Stanford University posted video of the seminar last week.

The result has been a 20 percent increase in revenue for wind farms, Terrell said.

The Department of Energy listed improved wind forecasting as a first priority in its 2015 Wind Vision report, largely to improve reliability: "Improve Wind Resource Characterization," the report said at the top of its list of goals. "Collect data and develop models to improve wind forecasting at multiple temporal scales, e.g., minutes, hours, days, months, years."

Google's goal has been more sweeping: to scrub carbon entirely from its energy portfolio, which consumes as much power as two San Franciscos.

Google achieved an initial milestone by matching its annual energy use with its annual renewable-energy procurement, Terrell said. But the company has not been carbon-free in every location at every hour, which is now its new goal: what Terrell calls its "24x7 carbon-free" goal.

"We're really starting to turn our efforts in this direction, and we're finding that it's not something that's easy to do," Terrell said. "It's arguably a moon shot, especially in places where the renewable resources of today are not as cost effective as they are in other places."

The scientists at London-based DeepMind have demonstrated that artificial intelligence can help by increasing the market viability of renewables at Google and beyond.

"Our hope is that this kind of machine learning approach can strengthen the business case for wind power and drive further adoption of carbon-free energy on electric grids worldwide," wrote DeepMind program manager Sims Witherspoon and Google software engineer Carl Elkin. In a DeepMind blog post, they outline how they boosted profits for Google's wind farms in the Southwest Power Pool, an energy market that stretches across the plains from the Canadian border to north Texas:

"Using a neural network trained on widely available weather forecasts and historical turbine data, we configured the DeepMind system to predict wind-power output 36 hours ahead of actual generation. Based on these predictions, our model recommends how to make optimal hourly delivery commitments to the power grid a full day in advance."

The DeepMind system predicts wind-power output 36 hours in advance, allowing power producers to make more lucrative advance bids to supply power to the grid.
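DeepMind's production model is a proprietary neural network trained on real turbine records; the toy sketch below only illustrates the workflow the excerpt describes (forecast weather features in, day-ahead output commitments out), with an ordinary least-squares fit and synthetic data standing in for the real model and historical data.

```python
import numpy as np

# Toy stand-in for the workflow described above: learn a mapping from
# forecast weather features to next-day wind-farm output, then use the
# model's hourly predictions as day-ahead delivery commitments.

rng = np.random.default_rng(0)

# Synthetic "historical" data: (wind speed m/s, air density proxy) -> MW output
X_hist = rng.uniform([3.0, 0.9], [15.0, 1.2], size=(500, 2))
y_hist = 2.0 * X_hist[:, 0] * X_hist[:, 1] + rng.normal(0.0, 0.5, 500)

# Fit: output ~ w0 + w1*speed + w2*density (ordinary least squares)
A = np.column_stack([np.ones(len(X_hist)), X_hist])
w, *_ = np.linalg.lstsq(A, y_hist, rcond=None)

def day_ahead_commitments(weather_forecast: np.ndarray) -> np.ndarray:
    """Predicted MW output for each forecast hour, used as the day-ahead bid."""
    A_new = np.column_stack([np.ones(len(weather_forecast)), weather_forecast])
    return A_new @ w

tomorrow = np.array([[8.0, 1.0], [10.0, 1.1], [6.0, 0.95]])  # 3 forecast hours
print(day_ahead_commitments(tomorrow))
```

Committing the model's prediction a day ahead, rather than selling into the real-time market, is what captures the higher day-ahead compensation Terrell describes.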

The rest is here:
Thanks To Renewables And Machine Learning, Google Now Forecasts The Wind - Forbes

UK guidance issued on explaining decisions made with artificial intelligence – Out-Law.com

The guidance looks at how organisations can provide users with a better understanding of how AI systems work and how decisions are made. It is intended to give organisations practical advice to help explain the processes and services that go into AI decision-making so that individuals will be better informed about the risks and rewards of AI.

The guidance follows the public consultation launched by the ICO and the Alan Turing Institute last year under their Project ExplAIn collaboration, and is part of a wider industry effort to improve accountability and transparency around AI.

Data protection expert Priya Jhakar of Pinsent Masons, the law firm behind Out-Law, said: "The ICO's guidance will be a helpful tool for organisations navigating the challenges of explaining AI decision making. The practical nature of the guidance not only helps organisations understand the issues and risks associated with unexplainable decisions, but will also get organisations thinking about what they have to do at each level of their business to achieve explainability and demonstrate best practice."

The guidance is split into three parts, explaining the basics of AI before going on to give examples of explaining AI in practice, and looking at what explainable AI means for an organisation.

It includes detail on the roles, policies, procedures and documentation required by the EU's General Data Protection Regulation that firms can put in place to ensure they are set up to provide meaningful explanations to affected individuals.

The guidance offers practical examples which put the recommendations into context and checklists to help organisations keep track of the processes and steps they are taking when explaining decisions made with AI. The ICO emphasises that the guidance is not a statutory code of practice under the Data Protection Act 2018.

The first section is aimed primarily at an organisation's data protection officer (DPO) and compliance teams, but is relevant to anyone involved in the development of AI systems. The second is aimed at technical teams, and the last section at senior management. However, the guidance suggests that DPOs and compliance teams may also find the last two sections helpful.

The guidance notes that using explainable AI can give an organisation better assurance of legal compliance, mitigating the risks associated with non-compliance. It suggests using explainable AI can help improve trust with individual customers.

The ICO acknowledged that organisations are concerned that explainability may disclose commercially sensitive material about how their AI systems and models work. However, it said the guidance did not require the disclosure of in-depth information such as an AI tool's source code or algorithms.

Organisations which limit the detail of any disclosures should justify and document the reasons for this, according to the guidance.

The ICO recognises that use of third-party personal data could be a concern for organisations, but suggests this may not be an issue where they assess the risk to third-party personal data as part of a data protection impact assessment, and make justified and documented choices about the level of detail they should provide.

The guidance also recognises the risks associated with not explaining AI decisions, including regulatory action, reputational damage and disengaged customers.

The guidance recommends that organisations divide explanations of AI into two categories: process-based explanations, which give information on the governance of an AI system across its design and deployment; and outcome-based explanations, which outline what happened in the case of a particular decision.
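The two-category split can be pictured as a simple record pairing both kinds of explanation for a single automated decision. This is purely illustrative; the class and field names are assumptions, not terminology from the guidance itself.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessExplanation:
    """How the AI system was governed across design and deployment."""
    design_choices: str   # e.g. model selection and testing approach
    data_governance: str  # how training data was sourced and checked
    monitoring: str       # how the deployed system is overseen

@dataclass
class OutcomeExplanation:
    """What happened in the case of one particular decision."""
    decision: str
    key_factors: list[str] = field(default_factory=list)
    human_review_contact: str = ""  # who to contact for a human review

@dataclass
class AIDecisionExplanation:
    process: ProcessExplanation
    outcome: OutcomeExplanation
```

Keeping the two halves together makes it easy to hand an affected individual both the general governance story and the specifics of their own decision.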

It identifies six ways of explaining AI decisions, including giving explanations in an accessible and non-technical way and noting who customers should contact for a human review of a decision.

The guidance also recommends explanations which cover issues such as fairness, safety and performance, what data has been used in a particular decision, and what steps have been taken during the design and implementation of an AI system to consider and monitor the impacts that its use and decisions may have on an individual, and on wider society.

The guidance also identifies four principles for organisations to follow, and how they relate to each decision type. Organisations should be transparent, be accountable, consider the context they operate in, and reflect on the impact their AI system may have on affected individuals and wider society.

See the original post here:
UK guidance issued on explaining decisions made with artificial intelligence - Out-Law.com