Archive for June, 2020

VisionQuest Biomedical Inc. and The University of New Mexico Combine Artificial Intelligence and Infrared Imaging to Diagnose Early Signs of Diabetic…

ALBUQUERQUE, N.M.--(BUSINESS WIRE)-- VisionQuest Biomedical Inc. and the University of New Mexico School of Medicine have been awarded a three-year, $3 million grant from the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), part of the National Institutes of Health (NIH), to complete the clinical validation of a new technology to detect early signs of diabetic peripheral neuropathy (DPN), also known as diabetic foot.

Dr. Peter Soliz (PhD), founder and chief technology officer of VisionQuest, says, "Our patented technology for detecting early signs of peripheral neuropathy will fundamentally change how physicians manage this severe complication of diabetes. This system will complement our already successful EyeStar system for the detection of diabetic retinopathy and will allow us to screen for multiple diabetes complications in one visit." Dr. Mark Burge (MD), deputy director of the University of New Mexico's Clinical & Translational Science Center (CTSC) and co-principal investigator on this project, adds, "A simple test that can be performed by the primary care physician in the clinic and which is highly sensitive and specific does not currently exist. This device will fill an important gap in providing comprehensive care to individuals diagnosed with diabetes."

Diabetes affects 34.2 million people in the United States, or 10.5 percent of the population. The Foundation for Peripheral Neuropathy estimates that over 70 percent of people diagnosed with diabetes have developed DPN, a painful complication of diabetes that leads to loss of sensation, foot ulcers, and nearly 54,000 amputations per year.

Current screening methods cannot reliably detect the early stages of DPN, when preventative care can improve outcomes. VisionQuest's fully automated, noninvasive system analyzes real-time thermal video of changing temperatures on the bottom of the foot to produce highly sensitive and consistent measurements of blood flow that can be used for diagnosis in primary-care clinics and do not require interpretation. The device and technique were awarded a US patent in 2014.
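
The release does not describe the underlying algorithm, but the general idea of turning thermal video into a blood-flow measurement can be illustrated with a minimal sketch: average the temperature over the foot region in each frame, then fit a simple rewarming curve to that time series. The exponential model, the function names, and the 30 fps default below are illustrative assumptions, not VisionQuest's patented method.

```python
# Illustrative sketch only; not VisionQuest's algorithm. Assumes the thermal
# video is available as a NumPy array of per-pixel temperatures (deg C) plus a
# boolean mask of the plantar (sole) region, and fits an exponential rewarming
# model whose rate constant serves as a crude blood-flow proxy.
import numpy as np
from scipy.optimize import curve_fit

def mean_plantar_temperature(frames: np.ndarray, foot_mask: np.ndarray) -> np.ndarray:
    """Average temperature over the foot region for each frame (frames: [T, H, W])."""
    return np.array([frame[foot_mask].mean() for frame in frames])

def rewarming_model(t, t_base, delta, rate):
    """Exponential recovery toward a baseline temperature after mild cooling."""
    return t_base - delta * np.exp(-rate * t)

def thermal_recovery_rate(frames: np.ndarray, foot_mask: np.ndarray, fps: float = 30.0) -> float:
    """Fit the rewarming curve; a faster recovery rate suggests better perfusion."""
    temps = mean_plantar_temperature(frames, foot_mask)
    t = np.arange(len(temps)) / fps
    p0 = (temps[-1], max(temps[-1] - temps[0], 0.1), 0.05)
    (_, _, rate), _ = curve_fit(rewarming_model, t, temps, p0=p0, maxfev=10000)
    return rate
```

A production system would presumably work per-pixel and against a clinically validated model rather than a single whole-foot average, but the sketch captures the basic signal the press release describes: how quickly plantar temperature changes over time.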

Through this NIDDK grant, VisionQuest will complete the clinical validation needed to pursue clearance by the Food and Drug Administration (FDA) to bring the device to market in the United States. Now more than ever, VisionQuest is committed to expanding access to health care for populations living with chronic diseases in the United States and around the world.

About VisionQuest Biomedical Inc.: VisionQuest develops and delivers innovative artificial intelligence-based imaging technologies that increase access to health care for the people who need it the most. We serve patients and providers in the most efficient and cost-effective ways possible. Dr. Soliz founded VisionQuest in 2007 to develop AI techniques that could be used by health-care professionals to evaluate digital medical photographs, specifically retinal images that showed evidence of diabetic retinopathy (the most common complication of diabetes and the leading cause of blindness in the working-age population) and other pathologies. In the United States, VisionQuest has established a network of clinics in which to study computer-based detection of retinal pathologies. In Mexico, our EyeStar software is used to screen patients for diabetic retinopathy. In the sub-Saharan country of Malawi, VisionQuest is applying retinal screening to the detection of malarial retinopathy.

About The University of New Mexico Clinical & Translational Science Center: UNM's CTSC supports high-quality collaborative translational science locally, regionally, and nationally; fosters scientific and operational innovation to improve the efficiency and effectiveness of clinical translational research; and creates, provides, and disseminates domain-specific translational science training and workforce development. It is committed to bettering health by streamlining science, transforming training environments, and improving the conduct, quality, and dissemination of research from laboratories to clinical practice, and out into communities. This prestigious designation ensures New Mexico remains a leader in the biomedical research field. It also fuels our culture of scientific discovery and its impacts on health.

View source version on businesswire.com: https://www.businesswire.com/news/home/20200601005198/en/

Federal Judge Orders National Security Commission on Artificial Intelligence to Open Meetings and Records to Public – Law & Crime

In a victory for government transparency advocates, a federal judge on Monday ordered the government commission responsible for developing policy recommendations on the use of artificial intelligence in U.S. national security and defense to begin opening up its operation to scrutiny by making its meetings and records available to the public.

U.S. District Judge Trevor McFadden of Washington, D.C., an appointee of President Donald Trump, reasoned in a 20-page opinion that the National Security Commission on Artificial Intelligence (NSCAI) was subject to the Federal Advisory Committee Act's (FACA) forward-looking publication and access requirements, thereby putting an end to the commission's ability to operate largely in secret.

"Today, the Court holds that Congress can and did impose Janus-like transparency obligations upon the AI Commission," Judge McFadden wrote in reference to the two-faced Roman god who looked into both the past and the future. "No rule of law forced Congress to choose just one."

Established by Congress in 2018 and chaired by former Google CEO Eric Schmidt, the NSCAI is tasked with considering the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States.

In a lawsuit filed last year, internet rights advocacy group EPIC (Electronic Privacy Information Center) said the Commission was violating FACA by operating almost entirely in secret. According to the complaint, the NSCAI has held the vast majority of its meetings behind closed doors and has failed to publish or disclose any notices, agendas, minutes, or materials for those meetings.

Judge McFadden held that under FACA, the boards, councils, and commissions that furnish expert advice, ideas, and diverse opinions to the Federal Government are also required to keep the public informed about their activities and take affirmative steps to make their records public, even absent a request.

EPIC previously won a court ruling which declared that the NSCAI is subject to the Freedom of Information Act (FOIA), which forced the Commission to begin disclosing its past records upon request.

Thanks To Renewables And Machine Learning, Google Now Forecasts The Wind – Forbes

Wind farms have traditionally made less money for the electricity they produce because they have been unable to predict how windy it will be tomorrow.

"The way a lot of power markets work is you have to schedule your assets a day ahead," said Michael Terrell, the head of energy market strategy at Google. "And you tend to get compensated higher when you do that than if you sell into the market real-time."

"Well, how do variable assets like wind schedule a day ahead when you don't know the wind is going to blow?" Terrell asked, "and how can you actually reserve your place in line?"

"We're not getting the full benefit and the full value of that power."

Here's how: Google and the Google-owned artificial intelligence firm DeepMind combined weather data with power data from 700 megawatts of wind energy that Google sources in the central United States. Using machine learning, they have been able to better predict wind production, better predict electricity supply and demand, and as a result, reduce operating costs.

"What we've been doing is working in partnership with the DeepMind team to use machine learning to take the weather data that's available publicly, actually forecast what we think the wind production will be the next day, and bid that wind into the day-ahead markets," Terrell said in a recent seminar hosted by the Stanford Precourt Institute for Energy. Stanford University posted video of the seminar last week.

The result has been a 20 percent increase in revenue for wind farms, Terrell said.

The Department of Energy listed improved wind forecasting as a first priority in its 2015 Wind Vision report, largely to improve reliability: "Improve Wind Resource Characterization," the report said at the top of its list of goals. "Collect data and develop models to improve wind forecasting at multiple temporal scales (e.g., minutes, hours, days, months, years)."

Google's goal has been more sweeping: to scrub carbon entirely from its energy portfolio, which consumes as much power as two San Franciscos.

Google achieved an initial milestone by matching its annual energy use with its annual renewable-energy procurement, Terrell said. But the company has not been carbon-free in every location at every hour, which is now its new goal: what Terrell calls its "24x7 carbon-free" goal.

"We're really starting to turn our efforts in this direction, and we're finding that it's not something that's easy to do. It's arguably a moon shot, especially in places where the renewable resources of today are not as cost effective as they are in other places."

The scientists at London-based DeepMind have demonstrated that artificial intelligence can help by increasing the market viability of renewables at Google and beyond.

"Our hope is that this kind of machine learning approach can strengthen the business case for wind power and drive further adoption of carbon-free energy on electric grids worldwide," said DeepMind program manager Sims Witherspoon and Google software engineer Carl Elkin. In a DeepMind blog post, they outline how they boosted profits for Google's wind farms in the Southwest Power Pool, an energy market that stretches across the plains from the Canadian border to north Texas:

"Using a neural network trained on widely available weather forecasts and historical turbine data, we configured the DeepMind system to predict wind-power output 36 hours ahead of actual generation. Based on these predictions, our model recommends how to make optimal hourly delivery commitments to the power grid a full day in advance."

The DeepMind system predicts wind-power output 36 hours in advance, allowing power producers to make more lucrative advance bids to supply power to the grid.
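
The excerpt above describes the setup only at a high level, but it can be sketched with standard tools: a small model trained on historical weather forecasts and observed turbine output, then used to propose hourly day-ahead commitments. The scikit-learn stand-in below, the feature set, and the function names are assumptions for illustration, not DeepMind's implementation.

```python
# Minimal stand-in for the approach described above; not DeepMind's model.
# Assumes rows of weather-forecast features (wind speed, direction, pressure,
# temperature, ...) paired with the turbine output observed 36 hours later.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_wind_forecaster(weather_features: np.ndarray, power_36h_later: np.ndarray):
    """Fit a small neural network mapping forecast features to MW output."""
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
    )
    model.fit(weather_features, power_36h_later)
    return model

def day_ahead_commitments(model, tomorrow_forecasts: np.ndarray, capacity_mw: float = 700.0) -> np.ndarray:
    """Predicted hourly output, clipped to plant capacity, to bid into the day-ahead market."""
    predicted = model.predict(tomorrow_forecasts)  # one row per hour of the next day
    return np.clip(predicted, 0.0, capacity_mw)
```

A real deployment would also have to handle forecast uncertainty and penalties for under-delivery, which this sketch ignores; the article credits day-ahead bidding of the predicted output for the roughly 20 percent revenue increase.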

UK guidance issued on explaining decisions made with artificial intelligence – Out-Law.com

The guidance looks at how organisations can provide users with a better understanding of how AI systems work and how decisions are made. It is intended to give organisations practical advice to help explain the processes and services that go into AI decision-making so that individuals will be better informed about the risks and rewards of AI.

The guidance follows the public consultation launched by the ICO and the Alan Turing Institute last year under their Project ExplAIn collaboration, and is part of a wider industry effort to improve accountability and transparency around AI.

Data protection expert Priya Jhakar of Pinsent Masons, the law firm behind Out-Law, said: "The ICO's guidance will be a helpful tool for organisations navigating the challenges of explaining AI decision making. The practical nature of the guidance not only helps organisations understand the issues and risks associated with unexplainable decisions, but will also get organisations thinking about what they have to do at each level of their business to achieve explainability and demonstrate best practice."

The guidance is split into three parts, explaining the basics of AI before going on to give examples of explaining AI in practice, and looking at what explainable AI means for an organisation.

It includes detail on the roles, policies, procedures and documentation required by the EU's General Data Protection Regulation that firms can put in place to ensure they are set up to provide meaningful explanations to affected individuals.

The guidance offers practical examples which put the recommendations into context and checklists to help organisations keep track of the processes and steps they are taking when explaining decisions made with AI. The ICO emphasises that the guidance is not a statutory code of practice under the Data Protection Act 2018.

The first section is aimed primarily at an organisation's data protection officer (DPO) and compliance teams, but is relevant to anyone involved in the development of AI systems. The second is aimed at technical teams, and the last section at senior management. However, it suggests that DPOs and compliance teams may also find the last two sections helpful.

The guidance notes that using explainable AI can give an organisation better assurance of legal compliance, mitigating the risks associated with non-compliance. It suggests using explainable AI can help improve trust with individual customers.

The ICO acknowledged that organisations are concerned that explainability may disclose commercially sensitive material about how their AI systems and models work. However, it said the guidance did not require the disclosure of in-depth information such as an AI tool's source code or algorithms.

Organisations which limit the detail of any disclosures should justify and document the reasons for this, according to the guidance.

The ICO recognises that use of third-party personal data could be a concern for organisations, but suggests this may not be an issue where they assess the risk to third-party personal data as part of a data protection impact assessment, and make justified and documented choices about the level of detail they should provide.

The guidance also recognises the risks associated in not explaining AI decisions, including regulatory action, reputational damage and disengaged customers.

The guidance recommends that organisations should divide explanations of AI into two categories: process-based explanations, giving information on the governance of an AI system across its design and deployment; and outcome-based explanations which outline what happened in the case of a particular decision.

It identifies six ways of explaining AI decisions, including giving explanations in an accessible and non-technical way and noting who customers should contact for a human review of a decision.

The guidance also recommends explanations which cover issues such as fairness, safety and performance, what data has been used in a particular decision, and what steps have been taken during the design and implementation of an AI system to consider and monitor the impacts that its use and decisions may have on an individual, and on wider society.

The guidance also identifies four principles for organisations to follow, and how they relate to each decision type. Organisations should be transparent, accountable, consider the context they operate in, and reflect on what impact the AI system may have on affected individuals and wider society.

How artificial intelligence will change the future of work – JAXenter

AI is developing at whirlwind rates. While nobody can say for certain how it will impact our work and personal lives, we can make a good few educated guesses. Also, with COVID-19 limiting human interaction in the built environment, advancements in AI and automation are on course to accelerate (providing funding is available, of course).

The age-old fear among some of the population is that AI will displace workers, leading to high levels of unemployment. A report by management consulting firm McKinsey shows that between 400 million and 800 million individuals across the globe could be replaced by automation and need to find new jobs by 2030.

However, AI could also create more jobs, as long as people are willing to adapt and work smarter. Research by PwC suggests that AI will add more to global GDP by 2030 than the combined current output of China and India.

So, in what ways could artificial intelligence change the future of work?

The virtual communication technologies being developed currently will dramatically enhance the way we experience remote working. Widespread access to WiFi and portable devices have led to an increase in dispersed teams. Companies are replacing their traditional offices with virtual offices, enabling them to access global talent.

Holographic transportation can imitate the physical face-to-face interactions that add value to our workplace experience; the things we usually miss out on when telecommuting. In place of video conferencing screens, augmented reality allows us to collaborate in real time with our coworkers through 3D holographic images and avatars.

Check out Microsoft's Spatial app for more insight.

Advancements in telerobotics have given humans the ability to operate machines remotely. This area of technology could also give rise to ubiquitous remote working; when teamed with holographic transportation, it could change how we work forever. Telerobotics is facilitated by broadband communications, sensors, and Internet of Things (IoT) technologies. 5G and Mobile Edge Computing (MEC) will accelerate the adoption of telerobotics and teleoperation.

AI and machine learning are already changing the way we recruit employees. Technology enables us to analyse thousands of profiles and compile a list of relevant candidates efficiently. Following the shortlisting process, AI technology can be used to communicate with candidates and keep them engaged at every stage of the recruitment journey.

There are lots of AI recruitment tools out there today that help businesses hire remote workers. Users can assess a candidate's skillset, get an insight into their personality, and even gauge to some extent whether or not they will fit with the culture of the company. Some solutions deliver online assessments to candidates and use AI to grade them. Facial recognition technology is used to detect any cheating.
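
The article does not name specific tools, so purely as a hypothetical illustration of the first step, automated shortlisting, candidate profiles can be ranked by their textual similarity to a job description. Everything below, including the TF-IDF approach and the function name, is an assumption rather than a description of any real recruitment product.

```python
# Hypothetical illustration of automated candidate shortlisting; commercial
# recruitment tools are far more sophisticated. Ranks free-text profiles by
# TF-IDF cosine similarity to a job description.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def shortlist(job_description, profiles, top_n=10):
    """Return the indices of the top_n profiles most similar to the job description."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([job_description] + list(profiles))
    scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
    return scores.argsort()[::-1][:top_n].tolist()
```

Real systems combine this kind of text matching with structured data, online assessments, and human review, as the article goes on to describe.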

Once the right candidate has been chosen, AI-enabled chatbots can be used (alongside human intervention) to facilitate the onboarding process, helping new starters understand everything from internal processes to the company culture.

AI also has the potential to minimise bias when it comes to recruitment and performance reviews, as candidates are assessed in a more fact-based way. It can also help HR professionals to pinpoint areas of bias in the company and resolve them efficiently. As a result, AI has the potential to make our virtual workplaces more inclusive and diverse.

AI can also be used to upskill new employees and indeed minimise the skills gap. The multinational engineering, industrial, and aerospace conglomerate Honeywell has developed a simulator for training purposes. Their solution, which helps reduce training time by 60%, enables the user to simulate tasks through virtual environments which are accessed through the cloud.

When artificial intelligence teams up with the Internet of Things, trend prediction can be done quickly, making businesses more efficient, sustainable, and effective. In time, it will also change the way companies are run, with humans collaborating with AI brains to solve complex problems. (Yes, there will still be a need for human input.)

As well as trend mapping, AI will make it easier for businesses to accurately identify any challenges. Businesses utilising AI and data (responsibly) could also significantly improve the customer and employee experience. Workers will have more time to focus on creatively fulfilling rather than repetitive tasks that machines will do. As a result, HR teams will be able to focus on more strategic work.

There are tools available that use robotic process automation (RPA) to monitor workflows and make informed, intelligent suggestions as to how tasks can be managed more effectively. They are able to identify when an individual is struggling with a problem and can provide assistance or point the worker in the right direction for human help.

Today and in the foreseeable future at least, AI in the context of work is all about complementing and maximising human input as opposed to replacing it. It's about eliminating the mundane and freeing us up to focus on the creative things only humans can do.
